Transactions
It's a personal preference, but I am thinking of doing all saves to my database in transactions. The reason is that I like to write key changes to additional log tables.
My questions are these:
- Do transactions add so much overhead to performance that I should only use them when absolutely necessary, or is it perfectly feasible to use transactions everywhere?
- If I save an entity as part of a transaction and this triggers another save of another entity (perhaps a log record), and this triggers another save, and so on: do I have to ensure that I use the same DataAccessAdapter (the one I started the transaction on) to perform all of the saves in the transaction, so that it can be rolled back? Also, these sub-saves will trigger their own transactions; is that allowed?
- I use the IEntityValidator classes to validate my entities prior to saving; some of the validations may require that I retrieve some data from the database first (I am thinking here of, for example, checking that an entity field value is unique before actually saving it). Can I open a second DataAccessAdapter to do checks like this without messing up the overall transaction?
Thanks in advance
hplloyd wrote:
It's a personal preference, but I am thinking of doing all saves to my database in transactions. The reason is that I like to write key changes to additional log tables.
My questions are these:
- Do transactions add so much overhead to performance that I should only use them when absolutely necessary, or is it perfectly feasible to use transactions everywhere?
It depends. When you run a transaction, the RDBMS has to keep track of all the actions performed so they can be rolled back. On some databases, transactions also lock rows exclusively, which means that other threads can't read those rows until the transaction is finished.
So there is indeed overhead: the longer the transaction, the more overhead, though a lot of tiny transactions can also add up to a lot of overhead. That doesn't mean you shouldn't use them, but consider this: a single entity save results in either an INSERT or an UPDATE statement, which by itself is already atomic. So saving a single entity doesn't have to be run inside an explicit transaction.
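For a save that touches more than one table, a transaction does make sense. A minimal sketch of the usual adapter pattern (OrderEntity/AuditLogEntity and the variable names are hypothetical; the transaction calls are the standard DataAccessAdapter API):

```csharp
using System.Data;  // IsolationLevel

// Sketch only: 'order' and 'logEntry' are hypothetical entity instances.
using (var adapter = new DataAccessAdapter())
{
    adapter.StartTransaction(IsolationLevel.ReadCommitted, "SaveOrderWithLog");
    try
    {
        adapter.SaveEntity(order);     // main save
        adapter.SaveEntity(logEntry);  // log record, same transaction
        adapter.Commit();              // both rows are persisted, or neither
    }
    catch
    {
        adapter.Rollback();            // undo everything done so far
        throw;
    }
}
```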
- If I save an entity as part of a transaction and this triggers another save of another entity (perhaps a log record), and this triggers another save, and so on: do I have to ensure that I use the same DataAccessAdapter (the one I started the transaction on) to perform all of the saves in the transaction, so that it can be rolled back? Also, these sub-saves will trigger their own transactions; is that allowed?
You should indeed use the same adapter: a new adapter creates a new connection, which will block until the other transaction is finished, and that creates a deadlock (on SqlServer, that is). You can check whether a transaction is already in progress on an adapter via DataAccessAdapter.IsTransactionInProgress. If one is, you shouldn't start a new transaction, as that will cause an exception to be thrown. Saves using the same adapter which already has a transaction in progress will automatically participate in that transaction.
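That check makes it possible to write a save routine that works both standalone and when called from inside a larger transaction. A sketch, assuming a hypothetical helper that receives the caller's adapter:

```csharp
using System.Data;  // IsolationLevel

// Sketch: SaveWithLog is a hypothetical helper. It participates in an ambient
// transaction when one exists, and only commits/rolls back a transaction it
// started itself.
void SaveWithLog(DataAccessAdapter adapter, IEntity2 entity, IEntity2 logEntry)
{
    bool ownsTransaction = !adapter.IsTransactionInProgress;
    if (ownsTransaction)
    {
        adapter.StartTransaction(IsolationLevel.ReadCommitted, "SaveWithLog");
    }
    try
    {
        adapter.SaveEntity(entity);    // joins any ongoing transaction
        adapter.SaveEntity(logEntry);
        if (ownsTransaction) { adapter.Commit(); }
    }
    catch
    {
        if (ownsTransaction) { adapter.Rollback(); }
        throw;
    }
}
```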
- I use the IEntityValidator classes to validate my entities prior to saving; some of the validations may require that I retrieve some data from the database first (I am thinking here of, for example, checking that an entity field value is unique before actually saving it). Can I open a second DataAccessAdapter to do checks like this without messing up the overall transaction?
If you're using Oracle, it's perfectly fine. If you're using SqlServer, however, your reads should use a filter on fields which have an index, and even then you run the risk of hitting a record which has been saved and exclusively locked, and which thus can't be read.
Thanks for all your help, that was very useful. I agree that I will only use transactions where more than one table is affected by a save, and I understand now that when I do use a transaction, all saves will need to go through the same adapter.
I have now done all that and it works great. The only thing I am not perfectly clear on is the IEntityValidator issue. You say:
If you're using Oracle, it's perfectly fine. If you're using SqlServer, however, your reads should use a filter on fields which have an index, and even then you run the risk of hitting a record which has been saved and exclusively locked, and which thus can't be read.
I am using SQL Server. So does this mean that as long as:
- my validations that need to access the database are run before any saves are made in the transaction,
- the validations do not themselves cause any saves to the database, and
- only the validation of the first step in the transaction (the main save) accesses the database (to check for uniqueness, for example), while the validation for subsequent steps (adding a record to the log table, for example) does not access the database, as otherwise this would conflict with point 1,
then I should not get the locking issue you discuss? If so, I can probably live with that, because accessing the database as part of validation should be a fairly rare occurrence, and apart from checking the uniqueness of a certain field, I cannot see at this stage what else I would need to do in validation that might require a query to the database.
hplloyd wrote:
I have now done all that and it works great. The only thing I am not perfectly clear on is the IEntityValidator issue. You say:
If you're using Oracle, it's perfectly fine. If you're using SqlServer, however, your reads should use a filter on fields which have an index, and even then you run the risk of hitting a record which has been saved and exclusively locked, and which thus can't be read.
I am using SQL Server. So does this mean that as long as:
- my validations that need to access the database are run before any saves are made in the transaction,
- the validations do not themselves cause any saves to the database, and
- only the validation of the first step in the transaction (the main save) accesses the database (to check for uniqueness, for example), while the validation for subsequent steps (adding a record to the log table, for example) does not access the database, as otherwise this would conflict with point 1,
then I should not get the locking issue you discuss? If so, I can probably live with that, because accessing the database as part of validation should be a fairly rare occurrence, and apart from checking the uniqueness of a certain field, I cannot see at this stage what else I would need to do in validation that might require a query to the database.
As long as your entity validator objects use the same adapter to read data, it's OK. If they create a new adapter, they will get a new connection, which will block on reads of records affected by the ongoing transaction.
Another way to work around this is to start the transaction manually (adapter.StartTransaction()) and specify ReadUncommitted as the transaction isolation level.
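A sketch of that workaround (OrderEntity and the variable name are hypothetical; IsolationLevel comes from System.Data):

```csharp
using System.Data;  // IsolationLevel

// Sketch: per the suggestion above, start the transaction manually with
// ReadUncommitted so that reads performed during the transaction use dirty
// reads rather than blocking on exclusively locked rows.
using (var adapter = new DataAccessAdapter())
{
    adapter.StartTransaction(IsolationLevel.ReadUncommitted, "SaveWithValidation");
    try
    {
        // validation reads through this same adapter happen here
        adapter.SaveEntity(order);
        adapter.Commit();
    }
    catch
    {
        adapter.Rollback();
        throw;
    }
}
```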