Bugs & Issues
System.NullReferenceException
Findev (User)
Joined on: 08-Dec-2014 23:21:38 | 100 posts
Posted on: 23-Feb-2020 21:54:30
Hi,

Background: I've been hunting the following exception with no luck so far. It occurs from time to time, and when it happens it basically snowballs and the web app becomes unresponsive:

Code:

A Task's exception(s) were not observed either by Waiting on the Task or accessing its Exception property. As a result, the unobserved exception was rethrown by the finalizer thread. Object reference not set to an instance of an object.

Exception type: System.NullReferenceException
Failed method: System.Web.ThreadContext.AssociateWithCurrentThread
Problem Id: System.NullReferenceException at System.Web.ThreadContext.AssociateWithCurrentThread
Assembly: System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a

System.AggregateException:
System.NullReferenceException:
at System.Web.ThreadContext.AssociateWithCurrentThread (System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
at System.Web.HttpApplication.OnThreadEnterPrivate (System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
at System.Web.HttpApplication.System.Web.Util.ISyncContext.Enter (System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
at System.Web.Util.SynchronizationHelper.SafeWrapCallback (System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
at System.Threading.Tasks.Task.Execute (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)


Last time it happened I saw that in some places it overlapped with an error that might be related to LLBLGen, can't say for sure, just a hypothesis for now, but maybe you might have some insight:

Code:

A Task's exception(s) were not observed either by Waiting on the Task or accessing its Exception property. As a result, the unobserved exception was rethrown by the finalizer thread. Object reference not set to an instance of an object.

Exception type: System.NullReferenceException
Failed method: SD.LLBLGen.Pro.QuerySpec.ProjectionLambdaTransformer.AddLambdaExpressionToCache
Problem Id: System.NullReferenceException at SD.LLBLGen.Pro.QuerySpec.ProjectionLambdaTransformer.AddLambdaExpressionToCache
Assembly: SD.LLBLGen.Pro.ORMSupportClasses, Version=5.6.0.0, Culture=neutral, PublicKeyToken=ca73b74ba4e3ff27


System.AggregateException:
System.NullReferenceException:
at SD.LLBLGen.Pro.QuerySpec.ProjectionLambdaTransformer.AddLambdaExpressionToCache (SD.LLBLGen.Pro.ORMSupportClasses, Version=5.6.0.0, Culture=neutral, PublicKeyToken=ca73b74ba4e3ff27)
at SD.LLBLGen.Pro.QuerySpec.ProjectionLambdaTransformer.GetCompiledLambdaExpression (SD.LLBLGen.Pro.ORMSupportClasses, Version=5.6.0.0, Culture=neutral, PublicKeyToken=ca73b74ba4e3ff27)
at SD.LLBLGen.Pro.QuerySpec.ProjectionLambdaTransformer.HandleToValueCall (SD.LLBLGen.Pro.ORMSupportClasses, Version=5.6.0.0, Culture=neutral, PublicKeyToken=ca73b74ba4e3ff27)
at SD.LLBLGen.Pro.QuerySpec.ProjectionLambdaTransformer.HandleMethodCallExpression (SD.LLBLGen.Pro.ORMSupportClasses, Version=5.6.0.0, Culture=neutral, PublicKeyToken=ca73b74ba4e3ff27)
at SD.LLBLGen.Pro.QuerySpec.GenericExpressionHandler.HandleExpression (SD.LLBLGen.Pro.ORMSupportClasses, Version=5.6.0.0, Culture=neutral, PublicKeyToken=ca73b74ba4e3ff27)
at SD.LLBLGen.Pro.QuerySpec.GenericExpressionHandler.HandleMemberAssignment (SD.LLBLGen.Pro.ORMSupportClasses, Version=5.6.0.0, Culture=neutral, PublicKeyToken=ca73b74ba4e3ff27)
at SD.LLBLGen.Pro.QuerySpec.GenericExpressionHandler.HandleMemberBinding (SD.LLBLGen.Pro.ORMSupportClasses, Version=5.6.0.0, Culture=neutral, PublicKeyToken=ca73b74ba4e3ff27)
at SD.LLBLGen.Pro.QuerySpec.GenericExpressionHandler.HandleMemberBindingList (SD.LLBLGen.Pro.ORMSupportClasses, Version=5.6.0.0, Culture=neutral, PublicKeyToken=ca73b74ba4e3ff27)
at SD.LLBLGen.Pro.QuerySpec.GenericExpressionHandler.HandleMemberInitExpression (SD.LLBLGen.Pro.ORMSupportClasses, Version=5.6.0.0, Culture=neutral, PublicKeyToken=ca73b74ba4e3ff27)
at SD.LLBLGen.Pro.QuerySpec.GenericExpressionHandler.HandleExpression (SD.LLBLGen.Pro.ORMSupportClasses, Version=5.6.0.0, Culture=neutral, PublicKeyToken=ca73b74ba4e3ff27)
at SD.LLBLGen.Pro.QuerySpec.ProjectionLambdaTransformer.ConvertToProjectorLambda (SD.LLBLGen.Pro.ORMSupportClasses, Version=5.6.0.0, Culture=neutral, PublicKeyToken=ca73b74ba4e3ff27)
at SD.LLBLGen.Pro.QuerySpec.QuerySpecExtensionMethods.Select (SD.LLBLGen.Pro.ORMSupportClasses, Version=5.6.0.0, Culture=neutral, PublicKeyToken=ca73b74ba4e3ff27)
at XYZ.Data.ProductBaseRepository+<>c__DisplayClass34_0.<GetActiveProductListItemsPart1Async>b__0 (XYZ.Data, Version=1.0.0.0, Culture=neutral, PublicKeyToken=nullXYZ.Data, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null: REDACTED\XYZ.Data\Data Repositories\ProductRepository.csXYZ.Data, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null: 1726)
at Core.Common.Data.LLBLGenDataRepositoryQuerySpec`2+<>c__DisplayClass11_0`1+<<QueryCollectionViaFactoryWithAdapterAsync>b__0>d.MoveNext (Core.Common, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
at Core.Common.Data.LLBLGenDataRepositoryQuerySpec`2+<QueryViaFactoryWithAdapterAsync>d__9`1.MoveNext (Core.Common, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
at Core.Common.Data.LLBLGenDataRepositoryQuerySpec`2+<QueryCollectionViaFactoryWithAdapterAsync>d__11`1.MoveNext (Core.Common, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
at XYZ.Data.ProductBaseRepository+<GetActiveProductListItemsPart1Async>d__34.MoveNext (XYZ.Data, Version=1.0.0.0, Culture=neutral, PublicKeyToken=nullXYZ.Data, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null: REDACTED\XYZ.Data\Data Repositories\ProductRepository.csXYZ.Data, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null: 1724)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
at XYZ.Business.Managers.ProductManager+<GetActiveProductListItemsAsync>d__83.MoveNext (XYZ.Business.Managers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=nullXYZ.Business.Managers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null: REDACTED\XYZ.Business.Managers\Managers\ProductManager.csXYZ.Business.Managers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null: 2179)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
at XYZ.Web.API.Infrastructure.Services.ApiProductService+<>c__DisplayClass12_0+<<GetCachedWithFavoritesAsync>b__0>d.MoveNext (XYZ.Web.API, Version=1.0.0.0, Culture=neutral, PublicKeyToken=nullXYZ.Web.API, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null: REDACTED\XYZ.Web.API\Infrastructure\Services\ApiProductService.csXYZ.Web.API, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null: 112)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
at XYZ.Common.Utilities.MemoryCacheWrapper+<GetOrAddWithAbsoluteExpirationWithLockAsync>d__6`1.MoveNext (XYZ.Common, Version=1.0.0.0, Culture=neutral, PublicKeyToken=nullXYZ.Common, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null: REDACTED\XYZ.Common\Utilities\MemoryCacheWrapper.cs)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
at XYZ.Web.API.Infrastructure.Services.ApiProductService+<GetCachedWithFavoritesAsync>d__12.MoveNext (XYZ.Web.API, Version=1.0.0.0, Culture=neutral, PublicKeyToken=nullXYZ.Web.API, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null: REDACTED\XYZ.Web.API\Infrastructure\Services\ApiProductService.csXYZ.Web.API, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null: 110)

The method
Code:
GetActiveProductListItemsPart1Async
is called quite frequently and takes a filter. Internally it's a bit too complex (group by, joins, subqueries (one with CorrelatedOver), sorting, filtering, paging) to simply paste in here, but I will first try testing with different filter values. I quickly examined the call chain: awaits seem to be in place and all use ConfigureAwait(false). If I'm reading the last exception correctly, the null ref happened when LLBLGen tried to convert and cache the query.
daelmo (Support Team)
Location: Guatemala City | Joined on: 28-Nov-2005 23:35:24 | 8137 posts
Posted on: 24-Feb-2020 06:43:21
Hi Findev,

It's quite difficult to figure this out when there is no reproducible pattern.
What does the code that queries this data look like?
How is the code called? Can you reproduce the error?


David Elizondo
LLBLGen'ing (articles and code snippets) | linkedin | twitter
 
Findev (User)
Posted on: 24-Feb-2020 08:56:58
daelmo wrote:
Hi Findev,

It's quite difficult to figure this out when there is no reproducible pattern.
What does the code that queries this data look like?
How is the code called? Can you reproduce the error?


Unfortunately I don't have a repro as of now...
Otis (LLBLGen Pro Team)
Location: The Hague, The Netherlands | Joined on: 17-Aug-2003 18:00:36 | 38047 posts
Posted on: 24-Feb-2020 10:13:57
Hmm, it's weird that an NRE happens at that spot:

GetCompiledLambdaExpression first creates a key from the expression. This would fail if expression were null (which it isn't). Then it checks with that key whether a value is present in the cache. That is done like this:

Code:
AssureDelegateCache();
string key = CreateExpressionKey(expression);
toReturn = _delegateCache.GetValue(key) ?? AddLambdaExpressionToCache(key, expression);


where AssureDelegateCache() is a lock-guarded check on _delegateCache which recreates it if it's not there. So at the spot where _delegateCache is read, it's there. It's also never removed again: it's created only when needed, but once created, it stays there.

The stack trace shows the call then went on to AddLambdaExpressionToCache, which is simply:

Code:

AssureDelegateCache();
var toAdd = expression.Compile();
_delegateCache[key] = toAdd;
return toAdd;


The only way a null ref can be thrown here is if expression is null. However, HandleToValueCall calls GetCompiledLambdaExpression like this:
Code:
var elementForSelect = GetCompiledLambdaExpression(LinqExpression.Lambda(toCall)).DynamicInvoke();

which means it can't pass null, as it creates a lambda expression there. ('LinqExpression' is an alias for System.Linq.Expressions.Expression.)

I don't know how you build your queries, or whether there is some caching going on in your code with respect to query elements, like re-using a projection ...
My suspicion is a multi-threading related issue with a cached element used in your query, which only occurs in a multi-request situation.


Frans Bouma
LLBLGen Pro / ORM Profiler Lead Developer | Blog | Twitter
 
Findev (User)
Posted on: 24-Feb-2020 11:35:08
Otis wrote:

...
I don't know how you build your queries, or whether there is some caching going on in your code with respect to query elements, like re-using a projection ...
My suspicion is a multi-threading related issue with a cached element used in your query, which only occurs in a multi-request situation.

Some years ago I do remember running into a case where I was indeed caching XYZFields.SomeProperty in static class fields; when I caught that, I think I reworked all those class fields into methods and properties. In this specific method I do see one local variable being reused. The method below is stripped down and doesn't include the filter-assembling logic:
Code:
var productId = ProductFields.Id.Source( productTableAlias );

which is then used in the Select:
Code:

return qf.Create().Select( () => new SomeDTO
{
    Id = productId.ToValue<Guid>(),
    ...
        ...
    MaxPrice = qf.Inventory.Select( InventoryFields.PricePerItem.Max() ).Where(
        SomeHelperClass.InventorySubquery.GetInventoryPricingFilterForProductId( productId )
    )
    .ToScalar()
    .ToValue<int?>(),

    MinPrice = qf.Inventory.Select( InventoryFields.PricePerItem.Min() ).Where(
        SomeHelperClass.InventorySubquery.GetInventoryPricingFilterForProductId( productId )
    )
    .ToScalar()
    .ToValue<int?>(),

    HasInventoryItems = qf.Create().Select( Functions.Constant( 1 ) ).Where(
        Functions.Exists(
            qf.Inventory.CorrelatedOver(
                SomeHelperClass.InventorySubquery.GetHasInventoryItemsFilterForProductId( productId )
            )
            .Select( InventoryFields.Id )
        )
    )
    .ToScalar()
    .ForceRowLimit()
    .ToValue<bool?>() ?? false

which is then used in .GroupBy() along with other fields.

Those helper methods are like:
Code:

public static class InventorySubquery
{
    public static Predicate GetInventoryPricingFilterForProductId(IEntityField2 correlatedProductId )
    {
        if (correlatedProductId == null)
        {
            throw new ArgumentNullException( nameof(correlatedProductId) );
        }

        return InventoryFields.ProductId == correlatedProductId & InventoryPricingAndAvailabilityFilter;
    }

    public static Predicate GetHasInventoryItemsFilterForProductId(IEntityField2 correlatedProductId)
    {
        if (correlatedProductId == null)
        {
            throw new ArgumentNullException( nameof(correlatedProductId) );
        }

        return InventoryFields.ProductId == correlatedProductId & InventoryFields.IsDeleted == false;
    }
}

private static PredicateExpression InventoryPricingAndAvailabilityFilter => InventoryFields.IsDeleted == false & InventoryFields.IsVisible == true;


Do you think there's a chance the exception was caused by reusing productId across multiple invocations, even though productId is a local variable?
Thank you!
Findev (User)
Posted on: 25-Feb-2020 03:32:22
Okay, my hypothesis didn't hold: it wasn't because the local variable was reused.
Meanwhile, I've managed to replicate the issue via a console app by calling this method with null values in a loop of 10k iterations, where each iteration creates 100 tasks and awaits them. The problem happens within 1.5k iterations, usually within hundreds.

I modified the failing LLBLGen method to:

Code:

private Delegate AddLambdaExpressionToCache(string key, LambdaExpression expression)
{
    if (string.IsNullOrWhiteSpace( key ))
    {
        File.WriteAllText( @"d:\Temp\llblgen_issue_key_null.txt", key );
    }

    if (expression == null)
    {
        File.WriteAllText( @"d:\Temp\llblgen_issue_exp_null.txt", key );
    }

    AssureDelegateCache();
    var toAdd = expression.Compile();

    if (_delegateCache == null)
    {
        File.WriteAllText( @"d:\Temp\llblgen_issue_exp_delegateCache_is_null.txt", key );
    }

    _delegateCache[key] = toAdd;
    return toAdd;
}


Which file do you think was created? :) Answer: llblgen_issue_exp_delegateCache_is_null.txt
I think this is basically a race condition:

Code:

private void AssureDelegateCache()
{
    lock(_delegateCacheLock)
    {
        if((_delegateCache != null) && (_delegateCache.Count >= CacheMaxEntriesPerThread))
        {
            _delegateCache = null;
        }
        if(_delegateCache == null)
        {
            _delegateCache = new ConcurrentDictionary<string, Delegate>(StringComparer.OrdinalIgnoreCase);
        }
    }
}

A thread that exits AssureDelegateCache and then tries to set a value in the dictionary can run into a null ref if another thread manages to set the dictionary field to null in the first if statement.
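The interleaving described above can be forced deterministically once it's made explicit. Here is a minimal sketch of the check-then-use race (all names are hypothetical, not the actual runtime code); the "other thread" resetting a full cache is simulated inline:

```csharp
using System;
using System.Collections.Concurrent;

// Sketch: AssureCache() guarantees the field is non-null only *inside* the
// lock; the later indexer access re-reads the field without a lock.
class DelegateCacheSketch
{
    private readonly object _cacheLock = new object();
    private ConcurrentDictionary<string, Delegate> _cache;
    private const int MaxEntries = 1;

    private void AssureCache()
    {
        lock (_cacheLock)
        {
            if (_cache != null && _cache.Count >= MaxEntries)
            {
                _cache = null; // the problematic reset
            }
            if (_cache == null)
            {
                _cache = new ConcurrentDictionary<string, Delegate>(StringComparer.OrdinalIgnoreCase);
            }
        }
    }

    public void Add(string key, Delegate d)
    {
        AssureCache();
        // Window: another thread entering AssureCache() with a full cache can
        // null the field right here. Simulated inline to make it deterministic:
        lock (_cacheLock) { _cache = null; }
        _cache[key] = d; // throws NullReferenceException
    }
}
```

Capturing the dictionary into a local after AssureCache(), or clearing instead of nulling (the temp fix above), closes the window, since readers then never observe a null field.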

For now I've changed the _delegateCache = null to _delegateCache.Clear() to test; over 3.4k iterations have passed. I'll leave the PC running and check in several hours.
Btw, it seems _delegateCache is set to null when the cache is full. Given that I'm invoking the same method over and over, I was expecting the cache to contain only a single entry; however, I recalled that the method has DateTime.Now as an additional predicate. Is my understanding of the caching correct that parameters are also included, i.e. if 500 users fetch their ids there will be 500 compilations and entries in the cache?
Enough troubleshooting for now, need to get some sleep - 4:30am :)

UPDATE: 10k iterations passed


Otis (LLBLGen Pro Team)
Posted on: 25-Feb-2020 09:40:04
Whoa, that's indeed a pretty bad piece of code on our part, thanks for testing! You're absolutely right: the _delegateCache = null; shouldn't happen, as readers don't use a lock. We'll fix this asap in the runtime.

Every expression that's compiled is cached with a key created from the expression tree. So executing the same query on multiple threads should result in just 1 entry in the cache, as the expressions lead to the same key. It's the compilation of the lambdas that's slow, so we cache them.
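The intended behaviour - structurally identical lambdas collapsing onto one cache entry - can be sketched with a naive key built from the printed expression tree. (The real runtime hashes the tree; this stand-in is illustrative only.)

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq.Expressions;

static class CompiledLambdaCacheSketch
{
    static readonly ConcurrentDictionary<string, Delegate> Cache =
        new ConcurrentDictionary<string, Delegate>();

    // Naive stand-in for the real key generator: key on the printed tree,
    // so structurally identical lambdas share one compiled delegate.
    public static Func<int, int> GetCompiled(Expression<Func<int, int>> expr)
    {
        string key = expr.ToString(); // e.g. "x => (x * 2)"
        return (Func<int, int>)Cache.GetOrAdd(key, _ => expr.Compile());
    }

    public static int Count => Cache.Count;
}
```

Calling GetCompiled(x => x * 2) from two different call sites, or from many threads, yields a single entry; compilation, the slow part, happens once per key.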

We'll check our key-generation code as well, as we might have made a small mistake somewhere, so that using different thread contexts leads to different keys for the same expression.

I'll get back to you a.s.a.p.
Otis (LLBLGen Pro Team)
Posted on: 25-Feb-2020 11:18:56
We can't reproduce the cache-size increase, even with multiple threads each running a loop of 10,000 iterations which each start 100 tasks and await them in parallel using Task.WhenAll(). We also added a datetime constant in a predicate, as you do. The cache stays at 19 elements across all threads, so we never hit the reset to null. We'll investigate further why your cache grew so quickly that you hit the limit (and thus ran into the issue).

Otis (LLBLGen Pro Team)
Posted on: 25-Feb-2020 11:50:44
With a Functions.Constant(1) we can reproduce the growth of the cache. Looking into that (it still behaves normally with our setup and resets are OK, but it might be that we need a specific setup to hit the race condition).
Otis (LLBLGen Pro Team)
Posted on: 25-Feb-2020 12:09:30
Functions.Constant(1) indeed results in a ConstantExpression wrapping a ConvertExpression containing the int 1 as the value. We take the hashcode of the constant and embed it in the key, assuming equal constants have the same hashcode, but if the constant is wrapped in a ConvertExpression, the hashcode we're using is the one from the ConvertExpression, not the one from the actual constant.

So that's bug 1. Bug 2 is the race condition.

(edit) The main issue is the queryfactory instance. It's an embedded closure in the lambda, which always results in a different hashcode, causing the projection to get a different key; thus it's recompiled every time, and also cached again every time...


Findev (User)
Posted on: 25-Feb-2020 12:22:13
Otis wrote:
Functions.Constant(1) indeed results in a ConstantExpression wrapping a ConvertExpression containing the int 1 as the value. We take the hashcode of the constant and embed it in the key, assuming equal constants have the same hashcode, but if the constant is wrapped in a ConvertExpression, the hashcode we're using is the one from the ConvertExpression, not the one from the actual constant.

So that's bug 1. Bug 2 is the race condition.


Currently I'm experimenting with:

Code:

ThreadPool.SetMinThreads( 100, 1000 );

for (var i = 0; i < 100; i++)
{
    var tasks = Enumerable.Range(0, 1).Select
        (x =>
         Task.Run
             (
             () =>
             {
                 var id = Guid.NewGuid();
                 //return userService.GetUserLightAsync(id); // 7 cache items
                 //return userService.GetAsync<User>(id); // 0 cache items
                 return productService.GetActiveEventListItemsAsync( ); // 2600 cache items
             }));

    
    await Task.WhenAll(tasks).ConfigureAwait(false);
}


That should be only 1 async task per iteration; for the resulting cache sizes see the comments above.

The console output is produced like this:
Code:

AssureDelegateCache();
var toAdd = expression.Compile();
Console.WriteLine( $"Cache size: {_delegateCache.Count}..." );
_delegateCache[key] = toAdd;
return toAdd;


Right now I'm commenting out parts of productService.GetActiveEventListItemsAsync(). Will post shortly.
PS: my dev db is on a different LAN PC, not sure if that has any effect.
PS 2: currently I'm using my temp fix: _delegateCache.Clear()
Findev (User)
Posted on: 25-Feb-2020 12:54:46
Further findings, fetching 3 properties:

Code:

return await QuerySpecHelper.QueryCollectionViaFactoryWithAdapterAsync( qf =>
{
    var productId = ProductFields.Id.Source( productTableAlias );
    var productType = ProductFields.ProductType.Source( productTableAlias );
    var companyName = CompanyFields.Name.As( companyNameColumnAlias ).Source( companyTableAlias );

    return qf.Create().Select( () => new ActiveProductListItemPart1
    {
        // 12 cache entries
        //Id = ProductFields.Id.Source(productTableAlias).ToValue<Guid>(),
        //ProductType = ProductFields.ProductType.Source(productTableAlias).ToValue<ProductType>(),
        //CompanyName = CompanyFields.Name.As(companyNameColumnAlias).Source(companyTableAlias).ToValue<string>(),

        // 309 cache entries
        Id = productId.ToValue<Guid>(),
        ProductType = productType.ToValue<ProductType>(),
        CompanyName = companyName.ToValue<string>(),
        
        ...
        10+ other properties (all commented out)
        
        //then: From, Where, GroupBy, OrderBy, Page


Apparently, if I declare these properties outside of the Select() (even just above the return statement), it results in a massive number of cache entries. I tried commenting out the CompanyName property materialization and got 11 and 209 respectively. So in the former case each new property seems to add just 1 entry, but in the latter it adds 100. Given that the loop is 100 iterations - has it generated a new cache entry for each iteration...? I was declaring these properties outside because each is quite long due to source/alias, which allows for more readable and concise code...


Otis (LLBLGen Pro Team)
Posted on: 25-Feb-2020 13:37:23
Thanks for the info :) Yes, the local variables are embedded in the lambda expression by the compiler (resulting in value(<>c__DisplayClass ... )) and always lead to a different hashcode, causing the key to be different every time.

It's basically this: https://petemontgomery.wordpress.com/2008/08/07/caching-the-results-of-linq-queries/

(We use a slightly different approach in places, but the root cause is the same.) The main problem is that you can't really avoid recompiling the lambda if it has local closures: if you compile the lambda once, the first closure is the one used for all subsequent usages as well, which means that for the queryfactory, all situations where the compiled lambda is re-used would use the first instance of the queryfactory. That isn't OK, as it's not a stateless object.
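The closure problem is easy to see in isolation: a lambda that captures a local produces an expression tree whose leaf is a ConstantExpression holding a fresh compiler-generated closure instance on every call, so any key that embeds that instance's hashcode differs every time. A small sketch (names here are ours, purely illustrative):

```csharp
using System;
using System.Linq.Expressions;

static class ClosureKeyDemo
{
    // Each call hoists 'local' into a new <>c__DisplayClass instance.
    public static Expression<Func<int>> Make()
    {
        int local = 42;
        return () => local;
    }

    // The body is a field access on a ConstantExpression wrapping the closure.
    public static object GetClosureInstance(Expression<Func<int>> e)
    {
        var fieldAccess = (MemberExpression)e.Body;
        var closure = (ConstantExpression)fieldAccess.Expression;
        return closure.Value; // the <>c__DisplayClass instance
    }
}
```

GetClosureInstance(Make()) returns a different object (and thus a different hashcode) on each call, which is exactly why the productId-style locals above produced a new cache entry per invocation.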

The idea is to reuse the projector lambda for the same query, to avoid recompiling the expression over and over again (as that's not really fast). In this case, however, that doesn't seem doable; at least I don't see how. We'll think a bit about how to avoid all this and still re-use compiled lambdas without trashing the cache.

We overlooked this completely, and for that my apologies. In our benchmarks we didn't run into it, as they don't use local elements in the lambdas, so it didn't show up there.
Otis (LLBLGen Pro Team)
Posted on: 25-Feb-2020 14:37:12
We think the solution is: if the key constructor finds a local object along the way during the visit of the tree, the lambda isn't cached, so it's always compiled. This still lets expressions at the local level (e.g. Field = SomeEntityFields.SomeField.ToValue<string>()) get compiled, but it won't trash the cache anymore with lambdas it has to recompile anyway.
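That detection step can be sketched with an ExpressionVisitor that flags constants whose type is compiler-generated (the closure classes); the caller can then skip caching. This is our illustrative sketch, not necessarily the shipped implementation:

```csharp
using System;
using System.Linq.Expressions;
using System.Runtime.CompilerServices;

// Flags any ConstantExpression wrapping a compiler-generated closure object;
// a key built over such a tree would differ per call, so caching is pointless.
sealed class ClosureDetector : ExpressionVisitor
{
    public bool FoundClosure { get; private set; }

    protected override Expression VisitConstant(ConstantExpression node)
    {
        if (node.Value != null &&
            node.Value.GetType().IsDefined(typeof(CompilerGeneratedAttribute), inherit: false))
        {
            FoundClosure = true;
        }
        return base.VisitConstant(node);
    }
}
```

A hypothetical GetCompiled would then run the detector over the lambda first: if FoundClosure is set, call expression.Compile() directly; otherwise go through the cache as before.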

The projector lambda used to materialize the result is always compiled as there are no local elements left in that lambda.

We hope to have a fixed runtime later today :)


Findev (User)
Posted on: 25-Feb-2020 14:47:00
Basically I need to rework all my Select(() => new ...) statements not to reference any local variables, heh :) Getting rid of those local properties will make the queries longer and a bit less maintainable, oh well. I checked another option:
Code:

return qf.Create()
         .Select( productId, productType, companyName )
         .From( queryHelper.GetJoinPart( qf ) )
         .WithProjector
             (
             r =>
                 new ActiveProductListItemPart1
                 {
                     Id         = (Guid) r[ 0 ],
                     ProductType = (ProductType) r[ 1 ],
                     CompanyName = (string) r[ 2 ]
                 } );


This results in fewer cache entries (8 vs 12). It's a bit uglier; however, it can use local variables and is, imho, better than Select<ActiveProductListItemPart1>, as it's much less error-prone in the mapping.

Back to cache bloating: I then added one more property:
Code:

return await QuerySpecHelper.QueryCollectionViaFactoryWithAdapterAsync( qf =>
{
return qf.Create().Select( () => new ActiveProductListItemPart1
{
    // 12 cache entries
    Id = ProductFields.Id.Source(productTableAlias).ToValue<Guid>(),
    ProductType = ProductFields.ProductType.Source(productTableAlias).ToValue<ProductType>(),
    CompanyName = CompanyFields.Name.As(companyNameColumnAlias).Source(companyTableAlias).ToValue<string>(),
    MaxPrice = qf.Inventory
                 .Select(InventoryFields.PricePerItem.Max())
                 .ToScalar()
                 .ToValue<int?>(),


Now the 12 cached items became 112 (same 100 iterations). I tried replacing qf.Inventory with new QueryFactory().Inventory - still 112. However, the following version of the query:
Code:

return await QuerySpecHelper.QueryCollectionViaFactoryWithAdapterAsync( qf =>
{                
var maxPrice = qf.Inventory.Select(InventoryFields.PricePerItem.Max()).ToScalar();

return qf.Create()
         .Select(productId, productType, companyName, maxPrice)
         .From(queryHelper.GetJoinPart(qf))
         .GroupBy(productId, productType, companyName)
         .WithProjector
             (
             r =>
                 new ActiveProductListItemPart1
                 {
                     Id = (Guid)r[0],
                     ProductType = (ProductType)r[1],
                     CompanyName = (string)r[2],
                     MaxPrice = (int?)r[3]
                 });

results in 8. It was also 8 before I added the MaxPrice property.
Findev (User)
Posted on: 25-Feb-2020 14:57:06
Otis wrote:
We think the solution is: if the key constructor finds a local object along the way during the visit of the tree, the lambda isn't cached, so it's always compiled. This still lets expressions at the local level (e.g. Field = SomeEntityFields.SomeField.ToValue<string>()) get compiled, but it won't trash the cache anymore with lambdas it has to recompile anyway.

The projector lambda used to materialize the result is always compiled as there are no local elements left in that lambda.

We hope to have a fixed runtime later today :)


So with this change I can keep on using local variables and they won't pollute the cache; however, they will be compiled anyway? How much of a performance penalty do you think that is? I'm now somewhat leaning towards ditching the expression-based .Select() and moving to .WithProjector()... hm

Btw, would it make sense to have some diagnostics (trace?) in the runtime that could help debug issues like this? Cache size would have been nice: if the value were growing and then being reset, we'd know the cache is being polluted. The same for lambda compilations: that would allow cheaper ones to stay as is, while more expensive ones get reworked to .WithProjector(). To my understanding the .WithProjector() approach is the best so far.


Otis (LLBLGen Pro Team)
Posted on: 25-Feb-2020 15:38:48
Findev wrote:
Otis wrote:
We think the solution is that if the key constructor finds a local object along the way during the visit of the tree, the lambda isn't cached, so it's always compiled. This would still leave expressions at the local level (e.g. Field = SomeEntityFields.SomeField.ToValue<string>() ) get compiled but it won't trash the cache anymore with lambdas it has to recompile anyway.

The projector lambda used to materialize the result is always compiled as there are no local elements left in that lambda.

We hope to have a fixed runtime later today Regular Smiley


So with this change I can keep on using local variables and they won't pollute the cache; however, they will be compiled anyway? How much of a performance penalty do you think that is? I'm now somewhat leaning towards ditching the expression-based .Select() and moving to .WithProjector()... hm

Not that much: the main performance bottleneck with lambdas is when the data is materialized, and that lambda is always compiled (as all local variables enclosed in the lambda have been evaluated and replaced). Compiling the lambda is of course a bit slower than having no compilation, but it's not the end of the world. I don't think you'll see any performance degradation, as the time the query takes to get executed and the data materialized is larger than the time it takes to evaluate the projection.

You'll also only have a lambda get compiled every time when you use e.g. the query factory inside the projection (as it's an enclosed instance). A straightforward projection with only ToValue&lt;&gt;() calls doesn't use a local variable and will thus be compiled once. Your approach with local variables is going to get recompiled every time, but that's already happening behind the scenes (as the cached lambda is never reused), so it'll give the same performance as you have now.
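The compile-once versus compile-every-time behavior described above can be sketched as a toy model. This is illustrative Python, not LLBLGen's actual implementation; `DelegateCache` and its key scheme are invented for the sketch:

```python
# Toy model of an expression-to-delegate cache: expressions whose key
# generation finds an enclosed local instance bypass the cache and are
# compiled every time; closure-free expressions are compiled once and
# re-used. (Invented sketch -- not LLBLGen's actual code.)
class DelegateCache:
    def __init__(self):
        self._cache = {}

    def get_or_compile(self, field_names, enclosed_locals=None):
        if enclosed_locals:
            # An enclosed local would make the key unique per call; caching
            # the compiled form would only pollute the cache, so: compile only.
            return self._compile(field_names)
        key = tuple(field_names)
        if key not in self._cache:
            self._cache[key] = self._compile(field_names)  # compiled once
        return self._cache[key]

    @staticmethod
    def _compile(field_names):
        # stand-in for compiling the projection lambda
        return lambda row: tuple(row[name] for name in field_names)

cache = DelegateCache()

# closure-free projection (only ToValue<>()-style field reads): cached once
for _ in range(100):
    proj = cache.get_or_compile(("Id", "Email"))
assert len(cache._cache) == 1

# projection enclosing a fresh local instance each call: never cached
for _ in range(100):
    local = object()
    proj = cache.get_or_compile(("Id", "Email"), enclosed_locals=[local])
assert len(cache._cache) == 1  # cache stays clean; lambda recompiled per call
```

This mirrors the trade-off described in the post: recompiling per call costs a little, but it is dwarfed by query execution and materialization.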

Quote:

Btw, would it make sense to have some diagnostics (trace?) in the run-time that could help debug issues like this? Cache size would be nice: if the value kept growing and then got reset, we'd know the cache is being polluted. Also for lambda compilations: allowing cheaper ones to stay as is, but reworking more expensive ones to .WithProjector(). To my understanding the .WithProjector() approach is the best so far.

That's a good idea, the cache pollution one. When the cache is cleared because it's full, it's good to notify about that so people get an idea of what's going on (like query plan recompilation on SQL Server).

Regarding WithProjector: what the Select( () => new ... ) does is effectively rework it into a WithProjector-like projection, so you can use the WithProjector approach to avoid any closures, but it's a bit less readable.
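The relationship between the two styles can be sketched like this (conceptual Python; the function names only mimic the QuerySpec method names and are not LLBLGen's API):

```python
# Select(() => new DTO {...}) hands the runtime a declarative expression,
# which is analyzed and then turned into a row projector; WithProjector
# hands the runtime a ready-made row projector, so there is nothing to
# analyze, rewrite, or cache. Both end up as "a function from row to result".

def select_style(field_names):
    # the analysis/compile step, done by the runtime behind the scenes
    return lambda row: {name: row[name] for name in field_names}

def with_projector_style(projector):
    # the caller already wrote the projector; pass it through untouched
    return projector

row = {"OrderId": 10248, "ShipCity": "Reims"}
p1 = select_style(["OrderId", "ShipCity"])
p2 = with_projector_style(
    lambda r: {"OrderId": r["OrderId"], "ShipCity": r["ShipCity"]})
assert p1(row) == p2(row)  # same result; WithProjector just skips a step
```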

We'll have a fix ready in an hour.
Frans Bouma
LLBLGen Pro / ORM Profiler Lead Developer | Blog | Twitter
 
Top
Findev
User



Location:

Joined on:
08-Dec-2014 23:21:38
Posted:
100 posts
# Posted on: 25-Feb-2020 16:23:35.  
Let me try to sum it up.

1) With the current run-time I have and the one with the upcoming fix:
.WithProjector() is the safest and sort of the best(?) for caching: no closures, fewer cache entries = fewer compilations, which is good, but less readable.

2) .Select(expression): with the current run-time I have and the one with the upcoming fix:
I have to get rid of ANY local variables inside the projection expression. Once this is done, the cache will be properly reused as the keys will match.
Minor downside: the code will be less readable because .As(), .Source() etc. will have to be spread out to the corresponding properties.

3) .Select(expression) + using the query factory for subqueries: with the current run-time I have and the one with the upcoming fix:
this will always lead to the generation of a new cache entry, so queries like that should be reworked to use .WithProjector().

4) I will still be able to see how the cache is growing via the Console.WriteLine in the modified sources I have now, to assist my refactoring.
Downside: I have to tinker with the public key + token and skip sn validation, but that's already automated :)

5) Later there will be a better way to get that information from the run-time. Maybe even query it directly (a static variable?); that might be more helpful: I could then check that value while looping the test(s) and catch it if the cache is growing, which would mean some queries need to be fixed (local variables) so that the cache key matches.

Replying per point is very much appreciated :)

P.S.: please don't forget to put up the sources for the fixed run-time


  Top
Otis
LLBLGen Pro Team



Location:
The Hague, The Netherlands
Joined on:
17-Aug-2003 18:00:36
Posted:
38047 posts
# Posted on: 25-Feb-2020 17:07:42.  
Hotfix is now available (also on nuget)

Findev wrote:
Let me try to sum it up.

1) With the current run-time I have and the one with the upcoming fix:
.WithProjector() is the safest and sort of the best(?) for caching: no closures, fewer cache entries, but less readable.

It's indeed the safest with respect to recompiles, as there is just one compile: the lambda you provide, compiled only once. As long as there's no variable enclosed in the lambda (and normally you don't have any variables enclosed with this approach) you won't have this lambda getting recompiled.

Quote:

2) .Select(expression): with the current run-time I have and the one with the upcoming fix:
I have to get rid of ANY local variables inside the projection expression. Once this is done, the cache will be properly reused as the keys will match.
Minor downside: the code will be less readable because .As(), .Source() etc. will have to be spread out to the corresponding properties.

Correct. But don't worry too much about recompiles: the key generator now properly detects whether the expression can be cached, and if not, it will recompile it and not cache it, so you won't see cache trashing anymore (at least that's what it should do; we couldn't find any cache trashing anymore).

Static variables are not local enclosures, btw. If you have a static field / property / method, it's seen as something that always returns the same thing, so the object returned will always result in the same hashcode for the key; using static variables therefore won't lead to cache trashing nor recompiles. Of course the elements have to be reusable. If you wrap your variables in static methods, the compiled lambda will simply call the static method when it's invoked, so the compiled form of the lambda is cachable and your cache shouldn't fill up, as the key is always the same.
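The difference between a static member and a local enclosure can be illustrated with a language-agnostic sketch (Python here; `CURRENT_FILTER` and `static_member` are invented names):

```python
# A cached, compiled lambda that references a static member embeds one
# stable reference and reads the member's value at *call* time, so a single
# cached delegate serves every call. A lambda closing over a local captures
# a different object per call, so its compiled form can't be shared.

CURRENT_FILTER = "CHOPS"        # stands in for a static field

def static_member():            # stands in for a static method wrapper
    return CURRENT_FILTER

cached_delegate = lambda: static_member()   # "compiled once, cached"
assert cached_delegate() == "CHOPS"

CURRENT_FILTER = "HANAR"
assert cached_delegate() == "HANAR"  # same delegate picks up the new value

# local enclosures: each iteration bakes a distinct value into the lambda
delegates = []
for value in ("CHOPS", "HANAR"):
    local = value               # fresh local per iteration
    delegates.append(lambda captured=local: captured)
assert delegates[0]() == "CHOPS" and delegates[1]() == "HANAR"
```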

Quote:
3) .Select(expression) + using the query factory for subqueries: with the current run-time I have and the one with the upcoming fix:
this will always lead to the generation of a new cache entry, so queries like that should be reworked to use .WithProjector().

Not needed. The new runtime will detect this and avoid caching the lambda you provide, so it will recompile it again every time the query is created; this is done with the old runtime as well. The lambda used for projection, created from this expression, in the end *is* compiled once. So you don't need to rework these: you won't lose performance over what you have now, and it won't trash the cache.

Quote:

4) I will still be able to see how the cache is growing via the Console.WriteLine in the modified sources I have now, to assist my refactoring.
Downside: I have to tinker with the public key + token and skip sn validation, but that's already automated :)


If you want to pull the cache, and observe the count, use:
Code:

var cache = typeof(ProjectionLambdaTransformer).GetField("_delegateCache", BindingFlags.Static | BindingFlags.NonPublic);
var count = ((IDictionary)cache.GetValue(null)).Count;


So you don't need to modify the sources :)

Quote:

5) Later there will be a better way to get that information from the run-time. Maybe even query it directly (a static variable?); that might be more helpful: I could then check that value while looping the test(s) and catch it if the cache is growing, which would mean some queries need to be fixed (local variables) so that the cache key matches.

It should now be the case that no query pollutes the cache anymore with a lot of expressions which are never re-used, as several checks are now in place to verify whether the expression contains elements which are local / enclosed: if so, the key is voided and the expression isn't cached.

We'll later on add a trace output.
Quote:

P.S.: please don't forget to put up the sources for the fixed run-time

Done. :)
Please let us know if this still causes problems
Frans Bouma
LLBLGen Pro / ORM Profiler Lead Developer | Blog | Twitter
 
Top
Findev
User



Location:

Joined on:
08-Dec-2014 23:21:38
Posted:
100 posts
# Posted on: 25-Feb-2020 17:19:53.  
Take away: don't change anything, but test with the new run-time. If I want the safest and best performance, go with .WithProjector(). Evaluate whether the slightly worse readability is justified by the gained benefits.

Alright, will make tea, pull the new sources and nugets, and start testing then. Thank you! :)


  Top
Findev
User



Location:

Joined on:
08-Dec-2014 23:21:38
Posted:
100 posts
# Posted on: 25-Feb-2020 19:24:28.  
Modified new sources the same way:
Code:

AssureDelegateCache();
var toAdd = expression.Compile();
Console.WriteLine( $"Cache size: {_delegateCache.Count}..." );
_delegateCache[key] = toAdd;
return toAdd;


When following is executed:
Code:

var tasks = Enumerable.Range(0, 1).Select(
    x => Task.Run(
        () =>
        {
            var id = Guid.NewGuid();
            return userService.GetUserLightAsync(id);
        }));

I got 8 messages about the cache size (0, 1, 2, 3, 4, 5, 6, 7)

when Enumerable.Range(0, 1) is changed to Enumerable.Range(0, 100),
I'm seeing duplicates in the output and the total number of lines ranges from ~160 to ~210. Is that expected behavior?

UPDATE: of course this is nowhere near proper benchmarking, but 300 iterations with 100 tasks each, the whole run executed 3 times one after another:
WithProjector(): 12:11, 14:54, 11:51
Select + expression: 11:27, 11:36, 11:30
seems like the lambda is a bit faster...
  Top
Otis
LLBLGen Pro Team



Location:
The Hague, The Netherlands
Joined on:
17-Aug-2003 18:00:36
Posted:
38047 posts
# Posted on: 26-Feb-2020 10:55:10.  
Findev wrote:
Modified new sources the same way:
Code:

AssureDelegateCache();
var toAdd = expression.Compile();
Console.WriteLine( $"Cache size: {_delegateCache.Count}..." );
_delegateCache[key] = toAdd;
return toAdd;


When following is executed:
Code:

var tasks = Enumerable.Range(0, 1).Select(
    x => Task.Run(
        () =>
        {
            var id = Guid.NewGuid();
            return userService.GetUserLightAsync(id);
        }));

I got 8 messages about the cache size (0, 1, 2, 3, 4, 5, 6, 7)

when Enumerable.Range(0, 1) is changed to Enumerable.Range(0, 100),
I'm seeing duplicates in the output and the total number of lines ranges from ~160 to ~210. Is that expected behavior?

Hmm, the GetUserLightAsync method, what does it do exactly?
The duplicate lines might be because it runs on multiple threads (so with a large range the chance is higher that 2 or more threads reach the add-to-cache line at roughly the same time)...

That the cache increases with that code suggests there's still a situation where the key generator doesn't spot a local element (which can appear in various forms, hence it's not entirely simple to detect...).
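The check the key generator performs can be sketched as a tree walk (my reconstruction in Python, not LLBLGen's code; the tuple-based node encoding is invented for the sketch):

```python
# Walk a toy expression tree and flag any constant node holding a reference
# to a non-primitive instance: that's an enclosed local, and an expression
# containing one must not be cached.
PRIMITIVES = (int, float, str, bool, type(None))

def has_enclosed_local(node):
    kind, *children = node
    if kind == "const":
        return not isinstance(children[0], PRIMITIVES)
    return any(has_enclosed_local(child)
               for child in children if isinstance(child, tuple))

# only static field references and literals -> safe to cache
static_only = ("call", ("const", "UserFields.Email"), ("const", "ToValue"))
# a captured instance (e.g. the query factory) -> compile, don't cache
with_local = ("call", ("const", "UserFields.Email"), ("const", object()))

assert has_enclosed_local(static_only) is False
assert has_enclosed_local(with_local) is True
```

As the post notes, the hard part is that enclosed locals "can appear in various forms": a real walker has to recognize them behind member accesses, method calls, and nested lambdas, not just bare constants.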

Quote:

UPDATE: of course this is nowhere near proper benchmarking, but 300 iterations with 100 tasks each executed 3 times each one after another:
WithProject(): 12:11, 14:54, 11:51
Select + expression: 11:27, 11:36, 11:30
seems like lambda is a bit faster...

Interesting :) The difference might be coming from the boxing that occurs in the WithProjector lambda (as it uses an object array), which is slower on .NET Core than on .NET Framework (compared to typed fetches). Other than that I can't explain the difference.


Frans Bouma
LLBLGen Pro / ORM Profiler Lead Developer | Blog | Twitter
 
Top
Findev
User



Location:

Joined on:
08-Dec-2014 23:21:38
Posted:
100 posts
# Posted on: 26-Feb-2020 11:11:18.  
Yes, lots of duplicate lines appear when the 100 tasks per iteration are executed; then it stops and the iterations continue without adding to the cache anymore. That's a service method which basically proxies to the repository:

Code:

public async Task<UserLight> GetUserLightAsync(string email)
{
    if (string.IsNullOrWhiteSpace(email))
    {
        throw new ArgumentNullException(nameof(email));
    }

    var result = await GetUsersLightInternalAsync( UserFields.Email == email ).ConfigureAwait( false );

    return result.SingleOrDefault();
}

private async Task<IList<UserLight>> GetUsersLightInternalAsync( IPredicate filter, ISortClause[] sortClauses = null, PagingParameters? paging = null )
{
    if (filter == null)
    {
        throw new ArgumentNullException( nameof(filter) );
    }

    // ReSharper disable once AsyncConverter.AsyncAwaitMayBeElidedHighlighting
    return await QuerySpecHelper.QueryCollectionViaFactoryWithAdapterAsync(
            qf =>
            {
                var q = GetUserLightQueryBody( filter, qf );

                if (sortClauses?.Any() == true)
                {
                    q = q.OrderBy( sortClauses );
                }

                if (paging != null)
                {
                    q = q.Page( paging.Value.PageNumber, paging.Value.PageSize );
                }

                return q;
            } )
        .ConfigureAwait( false );
}

private static DynamicQuery<UserLight> GetUserLightQueryBody( IPredicate filter, QueryFactory qf )
{
    return qf.Create&lt;UserLight&gt;()
             .Where( filter )
             .Select(
                 () =>
                     new UserLight
                     {
                         Id = UserFields.Id.ToValue<Guid>(),
                         Email = UserFields.Email.ToValue<string>(),
                         FirstName = UserFields.FirstName.ToValue<string>(),
                         LastName = UserFields.LastName.ToValue<string>(),
                         Culture = UserFields.Culture.ToValue<string>(),
                         TimeZoneId = UserFields.TimeZoneId.ToValue<string>(),
                         DateCreated = UserFields.DateCreated.ToValue<DateTime>()
                     } );
}


As for yesterday's "benchmark": I will test with local variables; I noticed the lambda was:
Code:

Id = ProductFields.Id.Source(productTableAlias).ToValue<Guid>(),
ProductType = ProductFields.ProductType.Source(productTableAlias).ToValue<ProductType>(),
CompanyName = CompanyFields.Name.As(companyNameColumnAlias).Source(companyTableAlias).ToValue<string>(),
MaxPrice = qf.Inventory
                 .Select(InventoryFields.PricePerItem.Max())
                 .ToScalar()
                 .ToValue<int?>(),

will see if there's a difference when it is:
Code:

Id = productId.ToValue<Guid>(),
ProductType = productType.ToValue<ProductType>(),
CompanyName = companyName.ToValue<string>(),
MaxPrice = qf.Inventory
             .Select(InventoryFields.PricePerItem.Max())
             .ToScalar()
             .ToValue<int?>(),


Thank you!

UPDATE: my timing "benchmarks" can be ignored, they are not consistent, so I'll just take your word about performance and keep using expressions in my selects :)
  Top
Otis
LLBLGen Pro Team



Location:
The Hague, The Netherlands
Joined on:
17-Aug-2003 18:00:36
Posted:
38047 posts
# Posted on: 26-Feb-2020 11:54:02.  
That it stops suggests the same enclosed instance for a variable is re-used for a particular thread, so the key is the same as a cached variant, but it still caches a lambda with a local variable somewhere... hmm.

Will try to rebuild your method and see if I can reproduce it.


Frans Bouma
LLBLGen Pro / ORM Profiler Lead Developer | Blog | Twitter
 
Top
Otis
LLBLGen Pro Team



Location:
The Hague, The Netherlands
Joined on:
17-Aug-2003 18:00:36
Posted:
38047 posts
# Posted on: 26-Feb-2020 13:02:22.  
Hmm, I don't see the cache trashing anymore when I do this:

Code:

public class Service
{
    public async Task<IList<OrderDTO>> GetOrders(IPredicate filter)
    {
        var qf = new QueryFactory();
        var q = GetOrderDTO(filter, qf);
        using(var adapter = new DataAccessAdapter())
        {
            var results = await adapter.FetchQueryAsync(q).ConfigureAwait(false);
            return results;
        }
    }
    

    public DynamicQuery<OrderDTO> GetOrderDTO(IPredicate filter, QueryFactory qf)
    {
        return qf.Create()
                 .Where(filter)
                 .Select(() => new OrderDTO()
                             {
                                 OrderId = OrderFields.OrderId.ToValue<int>(),
                                 CustomerId = OrderFields.CustomerId.ToValue<string>(),
                                 EmployeeId = OrderFields.EmployeeId.ToValue<int?>(),
                                 OrderDate = OrderFields.OrderDate.ToValue<DateTime?>(),
                                 Freight = OrderFields.Freight.ToValue<Decimal?>(),
                                 ShipCity = OrderFields.ShipCity.ToValue<string>(),
                                 ShipVia = OrderFields.ShipVia.ToValue<int?>()
                             });
    }        
}


public class OrderDTO
{
    public int OrderId { get; set; }
    public string CustomerId { get; set; }
    public int? EmployeeId { get; set; }
    public DateTime? OrderDate { get; set; }
    public Decimal? Freight { get; set; }
    public string ShipCity { get; set; }
    public int? ShipVia { get; set; }
}


and test:

Code:

public static void Main(string[] args)
{
    //InterceptorCore.Initialize("Runner");
    try
    {
        FetchTL();
        for(int i = 0; i < Environment.ProcessorCount/2; i++)
        {
            Console.WriteLine("Starting thread {0}", i);
            var t = new Thread(Program.DoTest);
            t.Start(i);
        }
    }
    catch(Exception ex)
    {
        DisplayException(ex);
    }
}

...
private static void DoTest(object param)
{
    Console.WriteLine("Start of test... Thread: {0}", param);
    var service = new Service();
    for(int i = 0; i < 1000; i++)
    {
        var tasks = Enumerable.Range(0, 100)
                              .Select(x => Task.Run(() =>
                                                    {
                                                        var c = "CHOPS";
                                                        return service.GetOrders(OrderFields.CustomerId.Equal(c));
                                                    }))
                              .ToArray();
        // materialize and wait; otherwise the deferred Select never runs the queries
        Task.WaitAll(tasks);
        var cache = typeof(ProjectionLambdaTransformer).GetField("_delegateCache", BindingFlags.Static | BindingFlags.NonPublic);
        Console.WriteLine("Iteration {0} done. Cachesize: {1}", i, ((IDictionary)cache.GetValue(null)).Count);
    }

    Console.WriteLine("End of test");
}

The cache size is consistent at a low number. The duplicates are fine: they only occur at the beginning, when one thread sees the expression to cache isn't there yet and caches it, while another thread has just drawn the same conclusion and caches it too. That's OK: as the key is the same, it simply replaces the previous entry. This is chosen over a lock because it's not important if this happens in the beginning (the compiled lambda is the same), and a lock would hurt the performance of everything over time. After a short period you don't see any messages anymore, as no expression is added to the cache anymore: they're only re-used from the cache. This is what should be happening.
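The lock-free "compile, then store" pattern described here can be sketched as follows (illustrative Python, not the actual runtime code; the counter only instruments the demo):

```python
# Two threads may both miss the cache, both compile, and both store under
# the same key; the second store just overwrites the first with an
# equivalent delegate. No lock guards the cache itself, so steady-state
# lookups pay no synchronization cost.
import threading

cache = {}
compile_count = 0
count_lock = threading.Lock()   # guards only the demo counter, not the cache

def get_projector(key):
    global compile_count
    proj = cache.get(key)
    if proj is None:
        with count_lock:
            compile_count += 1              # counts duplicate compiles
        proj = lambda row, k=key: row[k]    # stand-in for expression.Compile()
        cache[key] = proj                   # benign race: last write wins
    return proj

threads = [threading.Thread(
               target=lambda: [get_projector("OrderId") for _ in range(1000)])
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert len(cache) == 1     # all threads converge on a single entry
assert compile_count >= 1  # early duplicate compiles are possible and harmless
assert get_projector("OrderId")({"OrderId": 10248}) == 10248
```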
Frans Bouma
LLBLGen Pro / ORM Profiler Lead Developer | Blog | Twitter
 
Top


Powered by HnD ©2002-2007 Solutions Design
HnD uses LLBLGen Pro

Version: 2.1.12172008 Final.