Delayed Generic Save

TazMizZaz
User
Posts: 37
Joined: 29-Apr-2009
# Posted on: 23-Jan-2010 08:03:23   

I have an idea I'm working out in my head. I have a database that has slowed down over time due to feature enhancements and whatnot, and now performing a save can cause severe performance problems. Unfortunately, because the primary application that uses this database only works with one record at a time, the issue isn't really a top-priority fix. Which of course means workarounds. :)

So when I have to save batches of records, I run into performance issues: my remote calls (approx. 20-second timeout) end up timing out when I'm saving about 4 or more records at once (some of that is because the call comes from a Pocket PC, which has plenty of performance drain of its own; different story/issue). This is a problem, as I often need to save dozens of records. My thought (since changing the database structure for these tables is strictly forbidden) is to create some type of delayed save: my remote calls would serialize the entities to a record in a new, simple table, and a Windows service would come along later and perform the actual save.
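
Rough sketch of what I have in mind (all the names here are made up, and I'm assuming LLBLGen Pro Adapter entities support standard .NET binary serialization, which I believe they do):

    using System.IO;
    using System.Runtime.Serialization.Formatters.Binary;
    using SD.LLBLGen.Pro.ORMSupportClasses;

    public static class DelayedSave
    {
        // Client side: pack a dirty entity into a byte array so it can be
        // stored in a simple holding table for the Windows service to pick up.
        public static byte[] Pack(IEntity2 entity)
        {
            var formatter = new BinaryFormatter();
            using (var stream = new MemoryStream())
            {
                formatter.Serialize(stream, entity);
                return stream.ToArray();
            }
        }

        // Service side: deserialize without knowing the concrete type and
        // save polymorphically; SaveEntity doesn't need the compile-time type.
        // DataAccessAdapter is the generated adapter class.
        public static void SavePacked(byte[] blob, DataAccessAdapter adapter)
        {
            var formatter = new BinaryFormatter();
            using (var stream = new MemoryStream(blob))
            {
                var entity = (IEntity2)formatter.Deserialize(stream);
                adapter.SaveEntity(entity);
            }
        }
    }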

So, on to the questions.

1) How feasible does this sound?
2) Can the actual save be made generic, i.e. is there some way of deserializing the entities and then re-saving them without really having to know their types?
3) How do I handle data changes made after the sync occurred? This is unlikely (records are typically locked), but if it were possible (how??), I would like to offer unlocked records as a service in my extension application.
4) Anything else you can think of that would make this an absolute crapfest of coding?

Thanks.

Otis
LLBLGen Pro Team
Posts: 39866
Joined: 17-Aug-2003
# Posted on: 23-Jan-2010 11:03:46   

If a database slows down over time as rows are inserted, it's a file problem: most likely your database files are filled to capacity and the database has to grow them, which takes time. It also means your database files are likely fragmented on disk. Also check whether your transaction log files are huge; if so, you really have to set up a backup schedule, which will truncate the transaction log.

Frans Bouma | Lead developer LLBLGen Pro
TazMizZaz
User
Posts: 37
Joined: 29-Apr-2009
# Posted on: 23-Jan-2010 18:32:30   

I actually know what the problem is. It's related to the 6+ triggers that execute on each save and the dozens of huge stored procedures those triggers call. The functionality is needed for the table to work, but it makes each save take 3+ seconds on a fast machine. Which means 4 saves = 3 x 4 = 12 seconds, and so on. This is what I'm fighting against.

The database is consistently slow: it's 60 MB, sitting on a disk with hundreds of GBs free, with no fragmentation or any other outside factor. It's a programming issue with the SQL in the DB.

The problem is it's not my job to fix it; actually, it's especially not my job, and I will get crushed if I try. So I need to make it appear like I have more speed than is really there, which is why I came up with the delayed-save idea.

I was thinking overnight: this has got to be easy with LLBLGen. There must be something like 3 or 4 lines of code that would generically deserialize and save into the database, right? Then if I wanted to check for changes, I could write some fancy compare process, right?

Are dirty flags maintained through serialization?

Walaa
Support Team
Posts: 14993
Joined: 21-Aug-2005
# Posted on: 25-Jan-2010 10:15:17   

I suggest saving into a copy of the schema, just without those triggers and stored procedures. The Windows service would then move records from the copy into the original tables when needed.

TazMizZaz
User
Posts: 37
Joined: 29-Apr-2009
# Posted on: 05-Feb-2010 03:19:41   

I'm picking this one back up after a week of fixing everything else...

A second database is not a viable solution, since that would double the number of databases in existence. Because this solution is installed at customer sites, the number of databases is restricted to what the product actually requires; installing another database is not allowed.

Just for your information, I currently have 13 databases running on my test machine alone; I cannot possibly double that to solve this problem. Across customer installations, doubling would mean creating roughly 100-150 additional databases.

Walaa
Support Team
Posts: 14993
Joined: 21-Aug-2005
# Posted on: 05-Feb-2010 09:53:23   

Alright, then you may serialize the entities to disk, and later on pick them up (deserialize) and save them to the database.

You can either use a generic EntityCollection, which can hold entities of different types, or use a UnitOfWork.
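
A minimal sketch of the UnitOfWork route (the entity variables and file path are placeholders; this assumes the Adapter templates, where the class is UnitOfWork2, which is binary-serializable):

    using System.IO;
    using System.Runtime.Serialization.Formatters.Binary;
    using SD.LLBLGen.Pro.ORMSupportClasses;

    // Client side: queue entities of different types in one unit of work
    // and persist it to disk instead of hitting the slow database directly.
    var uow = new UnitOfWork2();
    uow.AddForSave(orderEntity);      // placeholder entities, any types
    uow.AddForSave(customerEntity);

    var formatter = new BinaryFormatter();
    using (var stream = File.Create(@"C:\pending\batch1.uow"))
    {
        formatter.Serialize(stream, uow);
    }

    // Service side: deserialize and commit; no compile-time knowledge of
    // the entity types involved is needed. DataAccessAdapter is the
    // generated adapter class.
    using (var stream = File.OpenRead(@"C:\pending\batch1.uow"))
    using (var adapter = new DataAccessAdapter())
    {
        var pending = (UnitOfWork2)formatter.Deserialize(stream);
        pending.Commit(adapter, true);   // true = auto-commit the transaction
    }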

And you can use a timestamp field with a concurrency predicate to catch data changes made in the meantime.
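
And a sketch of the concurrency side (OrderEntity/OrderFields are placeholder names; IConcurrencyPredicateFactory is the standard hook to attach such a predicate to an entity):

    using System;
    using SD.LLBLGen.Pro.ORMSupportClasses;

    // Produces a predicate that is added to the WHERE clause of the UPDATE,
    // so the save fails if the row's timestamp changed since it was read.
    [Serializable]
    public class TimestampConcurrencyFactory : IConcurrencyPredicateFactory
    {
        public IPredicateExpression CreatePredicate(
            ConcurrencyPredicateType predicateTypeToCreate, object containingEntity)
        {
            var order = (OrderEntity)containingEntity;
            var filter = new PredicateExpression();
            filter.Add(OrderFields.Timestamp == order.Fields["Timestamp"].DbValue);
            return filter;
        }
    }

    // Attach it before the entity is queued for the delayed save:
    orderEntity.ConcurrencyPredicateFactoryToUse = new TimestampConcurrencyFactory();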

TazMizZaz
User
Posts: 37
Joined: 29-Apr-2009
# Posted on: 05-Feb-2010 17:07:47   

Can you give an example of this generic entity collection or UnitOfWork?

My experience with entity collections is that they get declared like this: EntityCollection<Entity>, and then they are not very generic to use.

Walaa
Support Team
Posts: 14993
Joined: 21-Aug-2005
# Posted on: 08-Feb-2010 09:09:18   

TazMizZaz wrote:
My experience with entity collections is that they get declared like this: EntityCollection<Entity>, and then they are not very generic to use.

That's the generic form, so you can use EntityBase as the type parameter when you instantiate it.
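
For example (this assumes the Adapter templates, where the common base class is EntityBase2 and EntityCollection<T> comes from the generated HelperClasses code; the entity variables are placeholders):

    using SD.LLBLGen.Pro.ORMSupportClasses;

    // A collection typed on the base class can hold entities of
    // different types and be saved in one call.
    var mixed = new EntityCollection<EntityBase2>();
    mixed.Add(orderEntity);
    mixed.Add(customerEntity);

    using (var adapter = new DataAccessAdapter())
    {
        adapter.SaveEntityCollection(mixed);   // persists each dirty entity
    }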

Anyway, please check the following docs link for UnitOfWork