transaction handling problem

Posts   
 
    
tvoss avatar
tvoss
User
Posts: 192
Joined: 07-Dec-2003
# Posted on: 17-Dec-2003 20:45:30   

With this transaction:


Dim transactionManager As New Transaction(IsolationLevel.ReadCommitted, "GetNextAvailableTask")
Try
    ' Fetch the highest-priority task that is not completed, not checked out, and currently effective
    tasks = New TaskCollection
    transactionManager.Add(tasks)
    Dim cSort As ISortExpression = New SortExpression(SortClauseFactory.Create(TaskFieldIndex.Priority, SortOperator.Ascending))
    Dim cFilter As IPredicateExpression = New PredicateExpression(PredicateFactory.CompareValue(TaskFieldIndex.Completed, ComparisonOperator.Equal, False))
    cFilter.AddWithAnd(PredicateFactory.CompareValue(TaskFieldIndex.Out, ComparisonOperator.Equal, False))
    cFilter.AddWithAnd(PredicateFactory.CompareValue(TaskFieldIndex.EffectiveDate, ComparisonOperator.LessEqual, DateTime.Today))
    cFilter.AddWithAnd(PredicateFactory.CompareValue(TaskFieldIndex.ExpirationDate, ComparisonOperator.GreaterEqual, DateTime.Today))
    tasks.GetMulti(cFilter, 1, cSort)
    ' Check the task out by flagging it as 'Out' and saving inside the same transaction
    If tasks.Count = 1 Then
        tasks(0).Out = True
        tasks(0).Save()
    End If
    transactionManager.Commit()
Catch ex As Exception
    transactionManager.Rollback()
Finally
    transactionManager.Dispose()
End Try

I am seeing two asynchronous clients get issued the same task number (15 times out of 1,000), which I had hoped this transaction would make impossible. I raised the isolation level of the transaction to RepeatableRead and then to Serializable, and with both of them the tasks(0).Save() call would not execute, and I am not sure why.

With the old stored procedure I got 0 repeats out of 1,000. With the new transaction and code, some tasks end up being completed by more than one client.

Any ideas here?

tvoss avatar
tvoss
User
Posts: 192
Joined: 07-Dec-2003
# Posted on: 17-Dec-2003 20:48:19   

I realized I should describe in a bit more detail what the transaction is trying to do.

I try to find the next task that needs completion and issue it to one client only: lock it upon selection, then update the Out property to True so the task is checked out to that one client, after which no other client should be able to pick up that task for processing.

Before I used a transaction to lock, all clients processed all 1,000 tasks, giving 4,000 processing completions. Now I get around 1,015 completions.
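(For illustration only, not code from this thread: the check-out can also be expressed as a single test-and-set UPDATE, so the "is it free?" check and the "mark it as out" write happen under one exclusive row lock instead of a separate read and write. The table name Task, the key column TaskId and the direct SqlClient access are assumptions.)

Imports System.Data.SqlClient

Public Class TaskQueue
    ' Returns the claimed TaskId, or -1 when nothing was claimed (no task available,
    ' or another client won the race; the caller can simply call again).
    Public Shared Function ClaimNextTask(ByVal connectionString As String) As Integer
        Dim sql As String = "DECLARE @id int; "
        sql &= "UPDATE Task SET [Out] = 1, @id = TaskId "
        sql &= "WHERE TaskId = (SELECT TOP 1 TaskId FROM Task "
        sql &= "                WHERE Completed = 0 AND [Out] = 0 "
        sql &= "                  AND EffectiveDate <= @today AND ExpirationDate >= @today "
        sql &= "                ORDER BY Priority ASC) "
        sql &= "  AND [Out] = 0; "   ' re-check: a task already claimed by another client is not claimed again
        sql &= "SELECT @id;"

        Using conn As New SqlConnection(connectionString)
            Dim cmd As New SqlCommand(sql, conn)
            cmd.Parameters.AddWithValue("@today", DateTime.Today)
            conn.Open()
            Dim result As Object = cmd.ExecuteScalar()
            If result Is Nothing OrElse result Is DBNull.Value Then
                Return -1
            End If
            Return Convert.ToInt32(result)
        End Using
    End Function
End Class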

swallace
User
Posts: 648
Joined: 18-Aug-2003
# Posted on: 17-Dec-2003 21:58:43   

[Edit] Non-relevant suggestion deleted. Engaged fingers before mind.

tvoss avatar
tvoss
User
Posts: 192
Joined: 07-Dec-2003
# Posted on: 18-Dec-2003 05:36:43   

I solved the problem, but I don't understand why, when I moved to higher isolation levels with LLBLGen Pro transactions, my tasks object that had been added to the transaction could not execute its Save method within the transaction.

The same logic inside a stored procedure worked fine at higher isolation levels; the save within the stored procedure was allowed with no problem.

I thought SQL Server stored-procedure locking had been preventing the same task from being completed twice, but when I went back to stored procedures, even with isolation level Serializable, I found that some tasks were still grabbed by two different clients.

I finally gave up on SQL locking with multiple threads and went to wrapping the code of interest in:

SyncLock Me
    ' critical section
End SyncLock

and that solved all the problems. I guess with any resource shared between multiple threads you must do that; SQL locking alone will not suffice.
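A minimal sketch of that fix, assuming all clients run as threads inside a single process (the class and method names are made up). Note that SyncLock Me only serializes callers that share the same object instance, so a Shared lock object is the safer choice when more than one worker object exists:

Public Class TaskCheckout
    ' One lock object for the whole process, so every thread synchronizes on the same instance
    Private Shared ReadOnly CheckoutLock As New Object

    Public Sub GetNextAvailableTask()
        SyncLock CheckoutLock
            ' The read-then-update sequence from the first post goes here: fetch the next
            ' available task, set Out = True and Save it inside the LLBLGen Pro Transaction,
            ' then Commit (or Rollback on error) and Dispose.
        End SyncLock
    End Sub
End Class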

Let me know if you find anything on the first question in this message.

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39927
Joined: 17-Aug-2003
# Posted on: 18-Dec-2003 09:57:38   

When you create a Transaction object, it opens a connection and creates an ADO.NET transaction with the isolation level submitted.

When you issue a READ action, it will not block anything, at least not with ReadCommitted, since the DB holds shared locks on the read data. This means that multiple threads will read the same data. The second you change something, the changed row(s) are exclusively locked by the DB, which means another connection can't read those records until the locks are removed (unless the query specifies 'NOLOCK', but the DQE never adds that). Because you first read the data, multiple threads can read the same data before the rows get locked.

Reads do not create an exclusive lock on a resource, even in a serializable transaction (as far as I understand SQL Server's docs). See the SET TRANSACTION ISOLATION LEVEL documentation in Books Online.

So to prevent other threads from entering a critical section of your code (a section where you manipulate essential values), you have to use a locking mechanism that locks the code itself, because you have a sequence of actions: first a read (which does not lock the resource) and then an update based on what you read (which does lock the resource, but by then multiple threads can already have read the same data).
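(Illustration only, not something the DQE generates: when you write the SQL yourself, SQL Server can be asked for a stronger lock at read time with a locking hint, so the read and the update that follows are covered by one lock. The sketch below is plain ADO.NET and uses the same assumed table and column names as the first post.)

Imports System.Data
Imports System.Data.SqlClient

Public Class TaskQueueWithReadLock
    ' UPDLOCK makes the SELECT take an update lock that is held until the transaction ends;
    ' READPAST lets concurrent callers skip the locked row instead of blocking on it.
    ' Returns the claimed TaskId, or -1 when no task is available.
    Public Shared Function ClaimNextTask(ByVal connectionString As String) As Integer
        Using conn As New SqlConnection(connectionString)
            conn.Open()
            Using tx As SqlTransaction = conn.BeginTransaction(IsolationLevel.ReadCommitted)
                Dim selectSql As String = "SELECT TOP 1 TaskId FROM Task WITH (UPDLOCK, READPAST) "
                selectSql &= "WHERE Completed = 0 AND [Out] = 0 "
                selectSql &= "AND EffectiveDate <= @today AND ExpirationDate >= @today "
                selectSql &= "ORDER BY Priority ASC"

                Dim selectCmd As New SqlCommand(selectSql, conn, tx)
                selectCmd.Parameters.AddWithValue("@today", DateTime.Today)
                Dim result As Object = selectCmd.ExecuteScalar()
                If result Is Nothing OrElse result Is DBNull.Value Then
                    tx.Rollback()
                    Return -1
                End If
                Dim taskId As Integer = Convert.ToInt32(result)

                ' The selected row is still locked by this transaction, so a concurrent
                ' caller cannot have read the same task in its own SELECT above.
                Dim updateCmd As New SqlCommand("UPDATE Task SET [Out] = 1 WHERE TaskId = @id", conn, tx)
                updateCmd.Parameters.AddWithValue("@id", taskId)
                updateCmd.ExecuteNonQuery()

                tx.Commit()
                Return taskId
            End Using
        End Using
    End Function
End Class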

Frans Bouma | Lead developer LLBLGen Pro
orenpeled
User
Posts: 53
Joined: 25-Jul-2005
# Posted on: 17-Jul-2006 15:35:02   

I'm having a similar problem, and I have an idea (tell me what you think).

In my products Oracle web application, I have a field indicating whether the product is logically locked or not (since one product consists of several tables); this field belongs to the main products table.

I want to:

1. Physically lock the main product's record when a user wants to manipulate it.
2. Read the lock field to see whether the product is already in use by someone else or not (if not, continue).
3. Set the lock field to true (so that other users won't be able to manipulate the product).
4. Allow the user to perform the manipulation.
5. Set the lock field to false.
6. Commit - after the commit the physical lock will be removed.

But ReadCommitted and Serializable are not good enough for read-locking, so my idea is to use ReadCommitted and then perform a dummy update (between steps 1 and 2). This way the record will be locked, and other users will have to wait until a commit/rollback is performed.
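(A sketch of that idea, covering only the acquire step; this is not code from the thread. Assumptions: an already-open ADO.NET connection, and a PRODUCTS table with PRODUCT_ID and IS_LOCKED columns; real code should use bind parameters instead of string concatenation. Note that Oracle's SELECT ... FOR UPDATE takes the same row lock without writing anything, which may be a cleaner way to implement step 1.)

Imports System.Data

Public Class ProductLocker
    ' Dummy-update pattern: pin the product row, check the logical lock flag, set it, commit.
    Public Shared Function TryAcquireLock(ByVal conn As IDbConnection, ByVal productId As Integer) As Boolean
        Dim tx As IDbTransaction = conn.BeginTransaction(IsolationLevel.ReadCommitted)
        Try
            ' 1. Dummy update: takes an exclusive row lock on the product record, so
            '    concurrent callers block here until this transaction commits or rolls back.
            Execute(conn, tx, "UPDATE PRODUCTS SET IS_LOCKED = IS_LOCKED WHERE PRODUCT_ID = " & productId)

            ' 2. Read the logical lock flag; after any blocking above we see the latest committed value.
            Dim isLocked As Boolean = Convert.ToInt32(Scalar(conn, tx, _
                "SELECT IS_LOCKED FROM PRODUCTS WHERE PRODUCT_ID = " & productId)) <> 0
            If isLocked Then
                tx.Rollback()
                Return False
            End If

            ' 3. Claim the logical lock; committing releases the physical row lock.
            Execute(conn, tx, "UPDATE PRODUCTS SET IS_LOCKED = 1 WHERE PRODUCT_ID = " & productId)
            tx.Commit()
            Return True
        Catch
            tx.Rollback()
            Throw
        End Try
    End Function

    Private Shared Sub Execute(ByVal conn As IDbConnection, ByVal tx As IDbTransaction, ByVal sql As String)
        Dim cmd As IDbCommand = conn.CreateCommand()
        cmd.Transaction = tx
        cmd.CommandText = sql
        cmd.ExecuteNonQuery()
    End Sub

    Private Shared Function Scalar(ByVal conn As IDbConnection, ByVal tx As IDbTransaction, ByVal sql As String) As Object
        Dim cmd As IDbCommand = conn.CreateCommand()
        cmd.Transaction = tx
        cmd.CommandText = sql
        Return cmd.ExecuteScalar()
    End Function
End Class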

What do you think? Too easy to be good?

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39927
Joined: 17-Aug-2003
# Posted on: 19-Jul-2006 12:04:45   

The question I have is: why would you need a physical lock if you already have a functional lock? Physical locks are not that scalable: if your transaction takes a long time (long as in longer than a second), it could stall other operations, even on Oracle.

Frans Bouma | Lead developer LLBLGen Pro
orenpeled
User
Posts: 53
Joined: 25-Jul-2005
# Posted on: 20-Jul-2006 16:00:23   

My application is web-based, exposed as web services. Imagine there are two IIS servers, each with the application installed; in that case a functional lock will not work. On the other hand, if I have a DB lock, and I have only one DB server, there won't be any problems. I hope you get the picture.

Walaa avatar
Walaa
Support Team
Posts: 14995
Joined: 21-Aug-2005
# Posted on: 20-Jul-2006 16:21:37   

Just a suggestion:

I've faced this scenario before, and our resolution was to use a client-server approach: putting our DAL and BL on the server side (implemented as a Windows service) and having the web services act as the client side.

So if you have different web services on different IIS machines, they will all communicate with the Windows service, which will control the locking functionality.
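A sketch of the lock-manager piece such a service could expose (class and member names are made up, and the remoting/web-service plumbing between IIS and the Windows service is omitted). Since every lock request ends up in the one service process, a simple SyncLock-protected registry is enough:

Imports System.Collections.Generic

Public Class ProductLockManager
    Private Shared ReadOnly SyncRoot As New Object
    Private Shared ReadOnly LockedProducts As New Dictionary(Of Integer, String) ' productId -> owner

    ' Returns True when the caller obtained the lock, False when someone else already holds it.
    Public Shared Function TryLock(ByVal productId As Integer, ByVal owner As String) As Boolean
        SyncLock SyncRoot
            If LockedProducts.ContainsKey(productId) Then
                Return False
            End If
            LockedProducts.Add(productId, owner)
            Return True
        End SyncLock
    End Function

    ' Releases the lock, but only if the caller is the one that took it.
    Public Shared Sub Release(ByVal productId As Integer, ByVal owner As String)
        SyncLock SyncRoot
            Dim currentOwner As String = Nothing
            If LockedProducts.TryGetValue(productId, currentOwner) AndAlso currentOwner = owner Then
                LockedProducts.Remove(productId)
            End If
        End SyncLock
    End Sub
End Class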