Query Timeout Exception

Dodga
User
Posts: 3
Joined: 28-Sep-2010
# Posted on: 18-Aug-2011 10:45:17   

Hello LLBLGen Team,

I have a strange problem and I am running out of ideas about what could be wrong or how I can solve it. I have to say that I am not a very experienced .NET developer (coming from MS Dynamics NAV), so please bear with me if you shake your head at my code.

OK, back to my problem. About 70% of the time the code runs as it should. If an error occurs and I restart the job, it works again until the next error. The job runs only once a day, but late in the evening, so I would like to fix it...

Backend: SQL Server 2000 with a 600 GB database (can't be upgraded at the moment). LLBLGen version: 3.1.11.701, SelfServicing, .NET 4.0 + TPL. The table where I think the problem occurs (WarehouseEntry) has around 60 million rows (and growing), and it has an index on the fields I am selecting.

I have attached 3 files:
Exception.txt: the complete exception message
DoLoopItem.txt: where the parallel action starts and calls the calculation of one item
CalculateItem.cs: the calculation itself

Maybe you can give me a hint about what I am doing wrong here, or what can be done to fix this timeout.

Thanks for your help,
Richard Sykora

Attachments
Filename: LLBLGen.zip | File size: 5,556 | Added on: 18-Aug-2011 10:47:35 | Approved
Walaa
Support Team
Posts: 14995
Joined: 21-Aug-2005
# Posted on: 18-Aug-2011 11:54:12   

Timeouts are generally caused by something in the database that you are trying to access being locked by another transaction. You could update the CommandTimeout property of the DataAccessAdapter as described here, but that is rarely the solution unless you genuinely have a long-running query somewhere else which is locking the data.
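
For reference, a minimal sketch of raising that timeout from code. Since the first post mentions SelfServicing, the variant shown is the static property on the generated CommonDaoBase class; the value is in seconds and 180 is only an example:

    // SelfServicing: the command timeout is a global, static setting on the
    // generated CommonDaoBase class. The value is in seconds; 180 is only an
    // example, size it to your longest legitimate query.
    Data.DaoClasses.CommonDaoBase.CommandTimeOut = 180;

Raising the timeout only hides the symptom though: if something is blocking the query, it will simply block for longer before failing.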

We can't tell what's causing the timeout from over here.

Apparently this is where the timeout occurs:

        // Sums the Quantity of all WarehouseEntry rows for the current item
        // in location "C" whose bin code starts with "ZA".
        private decimal CalcZASum()
        {
            var query = from q in metaData.WarehouseEntry
                        where q.ItemNo_ == ItemNo
                              && q.LocationCode == "C"
                              && q.BinCode.StartsWith("ZA")
                        select q;

            // Sum() is what actually executes the query against the database;
            // this is the call that runs into the timeout.
            return query.Sum(a => a.Quantity);
        }

So you need to execute the generated SQL of this method against the database and see how long it takes. If it doesn't take too long, which I guess is the case, then definitely something is blocking it. So you'll have to do some profiling on the database to see what's blocking the query.
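
If you also want to measure it from the application side, here's a rough sketch that times the same call; it re-uses the metaData and ItemNo members from the snippet above:

    // Rough timing sketch: same query as CalcZASum(), wrapped in a Stopwatch so
    // you can see whether the call is genuinely slow or just blocked until the
    // command timeout kicks in. 'metaData' and 'ItemNo' are the same members
    // used by CalcZASum() above.
    private decimal CalcZASumTimed()
    {
        var sw = System.Diagnostics.Stopwatch.StartNew();
        var sum = (from q in metaData.WarehouseEntry
                   where q.ItemNo_ == ItemNo
                         && q.LocationCode == "C"
                         && q.BinCode.StartsWith("ZA")
                   select q).Sum(a => a.Quantity);
        sw.Stop();
        System.Diagnostics.Debug.WriteLine(
            string.Format("ZA sum = {0}, took {1} ms", sum, sw.ElapsedMilliseconds));
        return sum;
    }

If the statement is fast when run directly against the database but slow or timing out here under the nightly job, blocking rather than raw query cost is the likely cause.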

Dodga
User
Posts: 3
Joined: 28-Sep-2010
# Posted on: 18-Aug-2011 14:01:29   

Since I am using SelfServicing, I use this to modify the timeout value (Data.DaoClasses.CommonDaoBase.CommandTimeOut = 180;), but you are right, it doesn't solve my problem.

It's possible that the lock is coming from Navision itself, since there are a lot of new entries every day. But since I am only reading data from that table, is there a way to bypass the lock, for example by using a view?

I really appreciate your help here

arschr
User
Posts: 894
Joined: 14-Dec-2003
# Posted on: 18-Aug-2011 17:59:43   

If you search the forum and documentation for nolock, you'll find discussions on how to add it to your query. I don't know whether it works for SelfServicing or whether it is what you need to solve your problem, but that is what I would look at if I were you.
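
For SelfServicing specifically, one option with roughly the same effect as a NOLOCK hint is to run the fetch inside a transaction that uses the ReadUncommitted isolation level, so the reads don't wait on writers. A rough sketch, assuming LinqMetaData can be handed the transaction via its constructor (please verify the exact overloads for your version; the item number is just a placeholder):

    using System.Data;
    using SD.LLBLGen.Pro.ORMSupportClasses;

    // Sketch: reads performed through this transaction use READ UNCOMMITTED,
    // i.e. they behave like NOLOCK and don't block on writers. Note this means
    // dirty reads: rows Navision is still inserting/updating may be included.
    // Assumption: LinqMetaData accepts the transaction in its constructor.
    var itemNo = "1000"; // placeholder item number, not from the original post
    decimal zaSum;
    var readTx = new Transaction(IsolationLevel.ReadUncommitted, "ZaSumReadTx");
    try
    {
        var metaData = new LinqMetaData(readTx);
        zaSum = (from q in metaData.WarehouseEntry
                 where q.ItemNo_ == itemNo
                       && q.LocationCode == "C"
                       && q.BinCode.StartsWith("ZA")
                 select q).Sum(a => a.Quantity);
        readTx.Commit();
    }
    catch
    {
        readTx.Rollback();
        throw;
    }
    finally
    {
        readTx.Dispose();
    }

The view idea from the previous post would also work along the same lines: define the view with the NOLOCK hint in its SELECT and map an entity or typed view to it, so the entity code doesn't have to change at all.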

daelmo
Support Team
Posts: 8245
Joined: 28-Nov-2005
# Posted on: 18-Aug-2011 21:03:30   

I also think that running SQL Profiler against the database would help to find the real reason for the lock.

David Elizondo | LLBLGen Support Team
Otis
LLBLGen Pro Team
Posts: 39896
Joined: 17-Aug-2003
# Posted on: 19-Aug-2011 09:52:42   

Indeed, the NOLOCK hint will perhaps help. The main issue is that if a transaction does updates using a table scan, every row it reads gets read-locked, and selects from other connections won't be able to read those rows until the transaction commits.

If your select does a table scan, it too takes read locks which last until it completes. A NOLOCK hint will prevent these locks from being taken, so other queries won't be blocked.

What you can do is indeed look at the execution plans and check whether your queries all use indexes. Every WHERE clause that hits that massive table, whether in updates, selects or deletes, has to be resolved through indexes, so all fields mentioned in those clauses should be indexed.

Caveat: indexes take maintenance time. This isn't much but with massive tables it might be less optimal to use indexes if you do a lot of inserts.

In that case, you might want to look into running the selects against a copy of the data, e.g. a replica that is refreshed once a day, if the data for the selects doesn't have to be 'live' data.

Frans Bouma | Lead developer LLBLGen Pro