LLBLGen v3.5 vs LLBLGen v4.0

Posts   
 
    
vivek
User
Posts: 45
Joined: 06-Sep-2012
# Posted on: 18-Apr-2013 09:03:57   

Hi Guys,

We have quite a number of licences for LLBLGen v3.5, and now that LLBLGen v4.0 is out, we were wondering if you could tell us how the two versions differ from each other. This would help us decide whether we should spend the money to upgrade immediately, or wait if there are no major differences or improvements.

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39753
Joined: 17-Aug-2003
# Posted on: 18-Apr-2013 11:47:52   

Well, have you checked the 'what's new' page? http://www.llblgen.com/pages/whatsnew.aspx It tells you what's new in v4, and then you can decide whether it's something you'd like to have or not simple_smile

Frans Bouma | Lead developer LLBLGen Pro
Kodiak
User
Posts: 92
Joined: 13-Apr-2009
# Posted on: 21-Apr-2013 11:57:36   

I would really recommend upgrading.

We've made the switch to v4.0 and have seen around 20-40% speed improvements when querying using the LLBLGen Pro framework.

We've also seen lower memory usage (~25 MB, down from 170 MB pre-upgrade).

Obviously these will all depend upon your exact circumstances.

Next on my list to experiment with are DataScopes and Caching.

Alexander
User
Posts: 256
Joined: 05-Jul-2010
# Posted on: 23-Apr-2013 14:09:38   

Hi

Just converted as well

before (3.5): 32-bit => 342 MB of RAM needed, 64-bit => 661 MB
after (4.0): 32-bit => 187 MB of RAM needed, 64-bit => 330 MB

Given the app's overhead of 80 MB, this means that the actual "data" size has been reduced from 262 to 107 MB (32-bit), or from roughly 500 to 200 MB (64-bit).

Loading performance:

before (3.5), 32-bit: 25 seconds
after (4.0), 32-bit: 26 and 24 seconds => avg 25

So no performance gain, but massive footprint gain.

Thanks to the people at LLBLGen Pro.

A

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39753
Joined: 17-Aug-2003
# Posted on: 23-Apr-2013 15:20:39   

Alexander, thanks for the figures simple_smile What I find a little curious is that you see no difference in performance, as the way how data is loaded into entities is now drastically different (no entity field objects anymore internally, the row from the datareader is directly added to the entity instead of each value into a field). If you time an entity collection fetch, it should be ~20%-40% faster than v3.5. Could you do that for me please?
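Timing just the collection fetch, as Otis asks, could look roughly like this (a hedged sketch for an Adapter-style project; CustomerEntity and its factory are illustrative generated names that differ per project):

```csharp
// Hedged sketch: times only the LLBLGen Pro fetch itself, not the
// surrounding application code. Assumes an Adapter project with a
// generated CustomerEntity; generated-code namespaces vary per project.
using System;
using System.Diagnostics;
using SD.LLBLGen.Pro.ORMSupportClasses;

var sw = Stopwatch.StartNew();
using(var adapter = new DataAccessAdapter())
{
    var customers = new EntityCollection<CustomerEntity>(new CustomerEntityFactory());
    adapter.FetchEntityCollection(customers, null);  // null bucket: fetch all rows
}
sw.Stop();
Console.WriteLine("Entity collection fetch took " + sw.ElapsedMilliseconds + " ms");
```

Running this against the same table on v3.5 and v4.0 binaries isolates the materialization pipeline from the rest of the application.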

Frans Bouma | Lead developer LLBLGen Pro
Alexander
User
Posts: 256
Joined: 05-Jul-2010
# Posted on: 23-Apr-2013 15:30:17   

Otis wrote:

Alexander, thanks for the figures simple_smile What I find a little curious is that you see no difference in performance, as the way how data is loaded into entities is now drastically different (no entity field objects anymore internally, the row from the datareader is directly added to the entity instead of each value into a field). If you time an entity collection fetch, it should be ~20%-40% faster than v3.5. Could you do that for me please?

Well, I don't measure the actual "llblgen" load time. I measure the time it takes from the first load command until the last load command to load objects into my DAL, using the timestamps from the logfile (in both cases).

My DAL is built upon LLBLGen Pro, but it is optimized for a certain kind of application: those that require live updates on a "limited" set of data.

Since most of the data gets stored inside special collections, I assume the performance gain of LLBLGen Pro isn't "noticed" anymore in my absurdly rough benchmarks. So if LLBLGen takes 10% of the total time and you improve that by 20%, I only notice a difference of a second or so.

I'll do some more checks though, because I noticed that I was using Fields[i].Value and similar constructs, which cause a facade field to be created anyway. I tried removing these, but I'm pretty sure I use some suboptimal constructions as well.
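For reference, the two access styles Alexander contrasts look roughly like this (CustomerEntity and CompanyName are made-up generated names; the facade-field behavior is as described by Otis in this thread):

```csharp
// Illustrative only: CustomerEntity / CompanyName are hypothetical
// generated names from a typical LLBLGen Pro project.

// Reading through the Fields collection forces a facade field object
// to be materialized for that slot, costing extra memory:
object viaFields = customer.Fields["CompanyName"].CurrentValue;

// Reading via the generated typed property goes straight to the stored
// row value, so no per-column field object is needed:
string viaProperty = customer.CompanyName;
```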

But once again: a memory reduction of roughly 2.5x is amazingly good already and worth every minute I spent converting the stuff.

thanks

a

Alexander
User
Posts: 256
Joined: 05-Jul-2010
# Posted on: 23-Apr-2013 15:43:59   

Is it no longer possible to add attachments here?

ANTS profiler results:

35% of the time is spent inside FetchEntityCollection, so in total LLBLGen Pro was only responsible for 8 seconds. Of that 35%, 26% is spent inside the Firebird provider.

That brings it down to 2.5 seconds for you to optimize. Give that a 20% performance gain and you end up with a 0.5 second profit; it might even be 1 or 2.

It doesn't show in the total benchmark.

The memory does though :-)

thanks

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39753
Joined: 17-Aug-2003
# Posted on: 23-Apr-2013 15:48:14   

Alexander wrote:

Otis wrote:

Alexander, thanks for the figures simple_smile What I find a little curious is that you see no difference in performance, as the way how data is loaded into entities is now drastically different (no entity field objects anymore internally, the row from the datareader is directly added to the entity instead of each value into a field). If you time an entity collection fetch, it should be ~20%-40% faster than v3.5. Could you do that for me please?

Well, I don't measure the actual "llblgen" load time. I measure the time it takes from the first load command until the last load command to load objects into my DAL, using the timestamps from the logfile (in both cases).

My DAL is built upon LLBLGen Pro, but it is optimized for a certain kind of application: those that require live updates on a "limited" set of data.

Since most of the data gets stored inside special collections, I assume the performance gain of LLBLGen Pro isn't "noticed" anymore in my absurdly rough benchmarks. So if LLBLGen takes 10% of the total time and you improve that by 20%, I only notice a difference of a second or so.

Ah OK, that makes sense. simple_smile I was already worried I had tripped up somewhere and accidentally introduced a slowdown (even though in our own benchmarks it kills everything except Dapper, LINQ to SQL and hand-written code wink ). Add to that the resultset caching (which you can use regardless of whether you store the data elsewhere), and it's flying simple_smile
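The resultset caching Otis mentions is enabled per query in v4; a hedged QuerySpec-style sketch (QueryFactory and Customer are illustrative generated names for a typical project):

```csharp
// Hedged sketch of v4 resultset caching with QuerySpec; the generated
// names (QueryFactory, Customer) are illustrative, not from a real project.
var qf = new QueryFactory();
var q = qf.Customer.CacheResultset(new TimeSpan(0, 0, 10));  // cache for 10 seconds
using(var adapter = new DataAccessAdapter())
{
    var customers = adapter.FetchQuery(q);
    // An identical fetch issued within the next 10 seconds is served
    // from the cache instead of hitting the database.
}
```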

I'll do some more checks though, because I noticed that I was using Fields[i].Value and similar constructs, which cause a facade field to be created anyway. I tried removing these, but I'm pretty sure I use some suboptimal constructions as well.

Yes, doing that will create the facade field and will cause memory usage to go up a bit. For insert/update queries the facade fields are still created, but as those objects are very likely to be garbage collected right after, it's not a real issue. This optimization candidate will be addressed in a future v4.x version.

But once again: a memory reduction of roughly 2.5x is amazingly good already and worth every minute I spent converting the stuff.

Nice to hear! smile You know, I had sought so long for an optimization like this and never found it. The refactorings we did in v3.5 to make the field/fields code sharable between SelfServicing & Adapter were the first step, and one day it hit me: just store the datareader row inside the fields object, and create field objects only if required. Bingo: it was very fast, and as no entity field objects are needed for storing data anymore, it is much lighter on memory simple_smile
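The lazy-field idea Frans describes can be sketched like this (illustrative classes only, NOT the actual LLBLGen Pro runtime types):

```csharp
// Illustrative sketch of the idea: keep the raw datareader row, and only
// materialize a field object when code actually asks for one.
public interface IFieldLike
{
    object CurrentValue { get; }
}

public sealed class FacadeField : IFieldLike
{
    private readonly object[] _row;
    private readonly int _index;
    public FacadeField(object[] row, int index) { _row = row; _index = index; }
    public object CurrentValue { get { return _row[_index]; } }
}

public class LazyEntityFields
{
    private readonly object[] _row;       // copied straight from the datareader
    private IFieldLike[] _fieldObjects;   // usually stays null entirely

    public LazyEntityFields(object[] row) { _row = row; }

    // Fast path: value access allocates no per-column field objects.
    public object GetValue(int index) { return _row[index]; }

    // Slow path: a facade field is created on demand (the kind of cost
    // that Fields[i] access triggers).
    public IFieldLike GetField(int index)
    {
        if(_fieldObjects == null) { _fieldObjects = new IFieldLike[_row.Length]; }
        if(_fieldObjects[index] == null) { _fieldObjects[index] = new FacadeField(_row, index); }
        return _fieldObjects[index];
    }
}
```

An entity fetched this way carries one object[] per row instead of one field object per column, which is where the memory savings reported in this thread come from.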

Alexander wrote:

Is it no longer possible to add attachments here?

Nope; attachments in General Chat were mostly used by spammers, so we disabled them.

ANTS profiler results:

35% of the time is spent inside FetchEntityCollection, so in total LLBLGen Pro was only responsible for 8 seconds. Of that 35%, 26% is spent inside the Firebird provider.

That brings it down to 2.5 seconds for you to optimize. Give that a 20% performance gain and you end up with a 0.5 second profit; it might even be 1 or 2. It doesn't show in the total benchmark.

Thanks for clearing that up! simple_smile It's also a good illustration of how optimizations for a complete application often have to be applied in places other than where you might expect.

Frans Bouma | Lead developer LLBLGen Pro
miloszes
User
Posts: 222
Joined: 03-Apr-2007
# Posted on: 07-May-2013 14:17:29   

I'll add some numbers (3.5 vs 4.0). Adapter. PostgreSQL.

Fetching entities without any prefetch paths looks nice (10 000 rows): about 23% performance gain.

Fetching entities with prefetch paths (172 979 rows): about 7%.

Fetching entities with prefetch paths (16 222 rows): about 13%.

Fetching entities with prefetch paths (1 801 rows): about 4%.

Memory consumption: about 20% less.
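For context, a prefetch-path fetch with Adapter, as used in the benchmarks above, looks roughly like this (a hedged sketch; CustomerEntity and its Orders relation are illustrative generated names):

```csharp
// Hedged Adapter sketch; CustomerEntity / PrefetchPathOrders are
// illustrative generated names, not from a real project.
var customers = new EntityCollection<CustomerEntity>(new CustomerEntityFactory());
var path = new PrefetchPath2((int)EntityType.CustomerEntity);
path.Add(CustomerEntity.PrefetchPathOrders);  // fetch related orders in one extra query
using(var adapter = new DataAccessAdapter())
{
    adapter.FetchEntityCollection(customers, null, path);
}
```

Since each prefetch node is fetched as a separate query and then merged, much of the time for large graphs is spent outside the per-entity materialization that v4 optimized, which may explain the smaller gains reported here.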

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39753
Joined: 17-Aug-2003
# Posted on: 08-May-2013 11:40:07   

Interesting that you didn't see bigger speed increases with the massive prefetch paths. Apparently the time taken by these actions is spent outside the actual entity materialization pipeline.

Frans Bouma | Lead developer LLBLGen Pro
NMackay
User
Posts: 138
Joined: 31-Oct-2011
# Posted on: 15-Oct-2013 16:22:16   

I can confirm 4.0 is noticeably quicker than 3.5; when you add caching where appropriate, there's a pretty noticeable performance hike, especially for entity fetches with a large amount of rows.

3.5 is used in many of our applications and works perfectly, but I'd say it's worth justifying the upgrade to 4.0.

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39753
Joined: 17-Aug-2003
# Posted on: 15-Oct-2013 17:26:51   

Thanks for the great feedback, Norman! simple_smile

Frans Bouma | Lead developer LLBLGen Pro