Adapter.Save and Self Referencing Table Weirdness
Frans,
I've got a really weird one that I spent some time trying to reproduce. I now have a test DB, test projects and NUnit tests to reproduce the problem.
Basically, I have a table "Folder" which references itself via a ParentFolderID. The table also references another table "FolderType" in a ManyToOne relation.
In the test I construct the tree 3 levels deep and save recursively.
The bug is that, if I use the same FolderTypeEntity instance for the root, children and grandchildren, the ParentFolderID is not set and remains NULL.
On the other hand, if I use 2 different FolderTypeEntity instances, one for the root and the other for both the children and grandchildren, the ParentFolderIDs are set correctly.
I can zip up the folder with the VS projects etc if you want to reproduce it fully there... just let me know and I'll email it to you.
Marcus
using System;
using NUnit.Framework;
using SD.LLBLGen.Pro.ORMSupportClasses;
using Test;
using Test.DatabaseSpecific;
using Test.EntityClasses;
using Test.FactoryClasses;
using Test.HelperClasses;
namespace NUnitTest
{
/// <summary>
/// Summary description for Class1.
/// </summary>
[TestFixture]
public class TestSuite
{
[Test]
public void Succeeds()
{
FolderTypeEntity folderTypeEntity1 = CreateFolderType();
FolderTypeEntity folderTypeEntity2 = CreateFolderType();
DoTest(folderTypeEntity1, folderTypeEntity2);
}
[Test]
public void Fails()
{
FolderTypeEntity folderTypeEntity = CreateFolderType();
DoTest(folderTypeEntity, folderTypeEntity);
}
private static void DoTest(FolderTypeEntity folderTypeEntity1, FolderTypeEntity folderTypeEntity2)
{
FolderEntity folderEntity = CreateFolder(null, folderTypeEntity1);
string name = folderEntity.Name;
for (int i = 0; i < 3; i++)
{
FolderEntity childFolderEntity = CreateFolder(folderEntity, folderTypeEntity2);
folderEntity.ChildFolderCollection.Add(childFolderEntity);
for (int j = 0; j < 3; j++)
{
FolderEntity grandchildFolderEntity = CreateFolder(folderEntity, folderTypeEntity2);
childFolderEntity.ChildFolderCollection.Add(grandchildFolderEntity);
}
}
using (DataAccessAdapter adapter = new DataAccessAdapter())
{
adapter.SaveEntity(folderEntity);
}
FolderEntity fetchedFolder = new FolderEntity();
fetchedFolder.Name = name;
IPrefetchPath2 prefetch = new PrefetchPath2((int)EntityType.FolderEntity);
prefetch.Add(FolderEntity.PrefetchPathChildFolderCollection).SubPath.Add(FolderEntity.PrefetchPathChildFolderCollection);
using (DataAccessAdapter adapter = new DataAccessAdapter())
{
adapter.FetchEntityUsingUniqueConstraint(fetchedFolder, fetchedFolder.ConstructFilterForUCName(), prefetch);
}
Assert.AreEqual(3, fetchedFolder.ChildFolderCollection.Count);
foreach (FolderEntity childFolder in fetchedFolder.ChildFolderCollection)
{
Assert.AreEqual(3, childFolder.ChildFolderCollection.Count);
}
}
private static FolderEntity CreateFolder(FolderEntity parentFolderEntity, FolderTypeEntity folderTypeEntity)
{
FolderEntity folderEntity = new FolderEntity();
folderEntity.ParentFolder = parentFolderEntity;
folderEntity.FolderType = folderTypeEntity;
folderEntity.Name = Guid.NewGuid().ToString();
return folderEntity;
}
private static FolderTypeEntity CreateFolderType()
{
FolderTypeEntity folderTypeEntity = new FolderTypeEntity();
folderTypeEntity.Name = Guid.NewGuid().ToString();
return folderTypeEntity;
}
}
}
CREATE TABLE [dbo].[Folder] (
[FolderID] [int] IDENTITY (1, 1) NOT NULL ,
[FolderTypeID] [int] NOT NULL ,
[ParentFolderID] [int] NULL ,
[Name] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[FolderType] (
[FolderTypeID] [int] IDENTITY (1, 1) NOT NULL ,
[Name] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Folder] ADD
CONSTRAINT [PK_Folder] PRIMARY KEY CLUSTERED
(
[FolderID]
) ON [PRIMARY] ,
CONSTRAINT [UC_Folder_Name] UNIQUE NONCLUSTERED
(
[Name]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[FolderType] ADD
CONSTRAINT [PK_FolderType] PRIMARY KEY CLUSTERED
(
[FolderTypeID]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Folder] ADD
CONSTRAINT [FK_Folder_Folder] FOREIGN KEY
(
[ParentFolderID]
) REFERENCES [dbo].[Folder] (
[FolderID]
),
CONSTRAINT [FK_Folder_FolderType] FOREIGN KEY
(
[FolderTypeID]
) REFERENCES [dbo].[FolderType] (
[FolderTypeID]
)
GO
With the code you posted it indeed doesn't work here. Looking into it. The save returns true though... odd...
(I'm changing the code a bit; because of the GUIDs it's very hard to track which ID is which)
btw: you made a copy/paste error
FolderEntity grandchildFolderEntity = CreateFolder(folderEntity, folderTypeEntity2);
childFolderEntity.ChildFolderCollection.Add(grandchildFolderEntity);
shouldn't that be childFolderEntity instead of folderEntity, passed to CreateFolder ?
Furthermore, you already set the parent in CreateFolder, and then ALSO add the created folder to the child collection of the parent, which you shouldn't do. I got weird errors at first though: after I changed the child name to "child" + i and the grandchild name to "grandchild" + j, I got duplicate names reported while there was nothing in the table. :? Of course that was my own stupidity.
(edit) Ok, here's the problem. First it will try to save the root folder. As the root folder depends on FolderType, it will first try to save FolderType. This goes ok. After the foldertype has been saved, the save routine checks if there are entities depending on the foldertype. There are. It grabs that collection and calls SaveCollection.
In SaveCollection, it first runs into the root folder. As that one is already in progress, it is skipped. Next is child 0. SaveEntity is called with child 0. Child 0 checks if an entity it depends on is already in progress; this is done to prevent loops. The root folder is such an entity for child 0 and is in progress, so the save routine is exited. Next in the collection is grandchild 0 of child 0 (as that's also an entity depending on the foldertype). SaveEntity() sees that grandchild 0 of child 0 depends on child 0, so it calls SaveEntity for child 0. As that routine exits because the entity child 0 depends on is already in progress, it exits with true (otherwise the save would exit with a failure). Grandchild 0 of child 0 then proceeds with saving itself, which is of course not correct, as child 0 hasn't been saved correctly.
This is due to the fact that there are multiple paths in the object graph (which is saved) from a start point to an end point: foldertype -> child 0, and foldertype -> grandchild 0 of child 0 -> child 0 (and further to the root folder, which is also pointed to by the foldertype).
At the time grandchild 0 of child 0 calls SaveEntity with child 0, it can't know it has to save it this time, as there is already a save action in progress (of the root folder) which will save child 0. Knowing this would require graph walking for shortest paths etc., which is extremely intensive. Also, 'just proceed and save it' won't work, because child 0 would then re-start a save of the root folder, because it depends on that one, but the root folder is already in progress and waits for the foldertype, an entity it depends on, to finish, which is a loop, and loops don't work.
The only way to solve this is that the save action of grandchild 0 of child 0 is not performed because an entity it depends on (child 0) is already waiting to be saved. That requires graph knowledge, and for a small graph this is doable, but for a large graph (and I've seen large wicked graphs going berserk in older versions of the routine, which is why all these checks etc. are built in) this is undoable.
Another way to solve this, for now, is to start a transaction on the adapter, save all the foldertypes first and then all the folders.
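Roughly like this (untested sketch; folderType and rootFolder stand for your own variables, and passing true as the second argument refetches the entity so the new PKs are known):
using System.Data;
// ...
using (DataAccessAdapter adapter = new DataAccessAdapter())
{
adapter.StartTransaction(IsolationLevel.ReadCommitted, "SaveTree");
try
{
// save the real head of the graph first...
adapter.SaveEntity(folderType, true);
// ...then the folder tree, which now only depends on already-saved rows
adapter.SaveEntity(rootFolder, true);
adapter.Commit();
}
catch
{
adapter.Rollback();
throw;
}
}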
Or if you have another solution, I'm all ears
Otis wrote:
btw: you made a copy/paste error
Well spotted! But it still fails the test...
Otis wrote:
Furthermore, you already set the parent in CreateFolder, and then ALSO add the created folder to the child collection of the parent, which you shouldn't do.
mmm... but I'm saving the root (folderEntity), so the recursion should only walk its path down the child line. I guess the child will in turn save and will try to save the parent again. I think this is a small design issue with LLBLGen, as I believe the two amount to the same thing and I feel it should be legal...
If I change CreateFolder to ignore the ParentFolder by setting it to null (see below), I get the same failure results, so I don't think this is the cause of the problem.
private static FolderEntity CreateFolder(FolderEntity parentFolderEntity, FolderTypeEntity folderTypeEntity)
{
FolderEntity folderEntity = new FolderEntity();
folderEntity.ParentFolder = null;
folderEntity.FolderType = folderTypeEntity;
folderEntity.Name = Guid.NewGuid().ToString();
return folderEntity;
}
Let me tell you it took a while to track this one down!
I've edited my post with an explanation of the error.
Re-reading it: I could create yet another hashtable, add all waiting entities to that hashtable and check it, so grandchild 0 of child 0 would not be saved because an entity it depends on is waiting. But I think this is not good: in a deeper hierarchy (a grand-grandchild is saved while child 0 and grandchild 0/0 are not yet saved) it doesn't work, as the routine would never abandon the save, because it doesn't see child 0 as a related entity.
GREAT explanation... I'm still trying to digest it...
Otis wrote:
Another way to solve this, for now, is to start a transaction on the adapter, save all the foldertypes first and then all the folders.
I'm not sure that will work either... I changed the following to save the FolderType beforehand and re-ran the test. It still failed.
[Test]
public void Fails()
{
FolderTypeEntity folderTypeEntity = CreateFolderType();
using (DataAccessAdapter adapter = new DataAccessAdapter())
{
adapter.SaveEntity(folderTypeEntity, true);
}
DoTest(folderTypeEntity, folderTypeEntity);
}
I'll have a re-read of the problem to understand it better.
Marcus
If, instead of saving the root folder, you save the foldertype, it should work, as the foldertype is the real head of the graph, not the root folder.
The problem is that the graph of objects to save has multiple paths leading to the same entity. The save routine uses recursion to walk a path and stops if there are no more entities to save, either because there aren't any left in that path or because the entities to save are already in progress, and by stopping the recursion these will be saved. Because the order in which the entities have to be saved is built into the code, this never fails.
There is one exception: when the recursive routine should back off because somewhere in the graph it will run into the same entity again anyway. In this situation that's when the foldertype is saved and it starts saving entities depending on that foldertype (all other entities in this case). It shouldn't do that, as these will be saved anyway via another graph path: rootfolder <- children <- grandchildren.
The problem is though: when do you know that? That's very hard to tell. A human looking at a picture of a graph can tell, but a computer with just one node in its hands and a couple of relations is not able to do that easily.
I thought about this a bit, and there is a solution, but it's cumbersome: first traverse the complete graph of objects, create a queue of entities in the right order and then save these front to back. The sorting of the entities is then the problem. It's not hard to figure out whether entity A should be saved before entity B if B depends on A. It gets more problematic if you have something like this (A <- B means B depends on A, so A has to be saved first):
E->Z->Y->X->W->V-> A<-B<-C<-D<-E
This is in fact a 2-path graph which starts with E and has two paths to A. Obviously, the first entity to save here is A, then down to B, C, D and V, W, X, Y, Z and then E. The current routine can save this graph as it starts at one side (the left E), and when it has saved D it sees E is already in progress, rolls back to the start of the recursion (E->Z) and saves E.
To sort this graph into the right order is a bit cumbersome, I think, especially when other branches are present, like C depending on another set of entities, or X being the entity a couple of other entities depend on. The problem is: when you arrive at A using one of the paths from E (or worse, what if you start with W), where do you place V? Before E? You don't know. Placing it right after A is probably right, but it might not be (I think; I don't have proof for that hunch).
Anyway, I'll put it on the todo to investigate a different saving core loop, to see if it will make a difference.
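For reference, the ordering itself is the classic topological sort; here's a bare-bones sketch of the idea (Kahn's algorithm over an imaginary dependency map; a generic illustration only, not LLBLGen code). For the 2-path example above it would emit A first, then both tails, and E last.
using System;
using System.Collections.Generic;
static class SaveOrder
{
// dependsOn[e] = the set of entities that must be saved before e.
// Every entity must appear as a key, even with an empty set.
public static List<T> Sort<T>(Dictionary<T, HashSet<T>> dependsOn)
{
var remaining = new Dictionary<T, HashSet<T>>();
foreach (KeyValuePair<T, HashSet<T>> kv in dependsOn)
remaining.Add(kv.Key, new HashSet<T>(kv.Value));
var order = new List<T>();
var ready = new Queue<T>();
foreach (KeyValuePair<T, HashSet<T>> kv in remaining)
if (kv.Value.Count == 0)
ready.Enqueue(kv.Key); // no dependencies: can be saved first
while (ready.Count > 0)
{
T entity = ready.Dequeue();
order.Add(entity);
foreach (KeyValuePair<T, HashSet<T>> kv in remaining)
if (kv.Value.Remove(entity) && kv.Value.Count == 0)
ready.Enqueue(kv.Key); // last dependency satisfied
}
if (order.Count != remaining.Count)
throw new InvalidOperationException("Cycle in the graph: no valid save order exists.");
return order; // save front to back: position n only depends on [0, n-1]
}
}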
I have modified the DB to expand out the tree for the purpose of testing... I have added ChildFolder and GrandchildFolder tables which are exactly the same as the Folder table, to remove the self-referencing element.
I have re-run the test to see if the problem was due to the table being self-referencing (same problem). I have also changed the Name field to be more meaningful, as you did, and now, having re-read your explanation, I can see exactly what you mean. Only the grandchildren fail to save correctly...
However:
Otis wrote:
This is due to the fact that there are multiple paths in the object graph (which is saved) from a start point to an end point: foldertype -> child 0, and foldertype -> grandchild 0 of child 0 -> child 0 (and further to the root folder, which is also pointed to by the foldertype).
This can't be the case, as I am now saving the FolderType before calling save on Folder... So there is no graph for the purpose of the save, as it's only a tree, since FolderType is not included in the save.
The only difference between the save that succeeds and the save that fails is that the FolderTypeEntity in the failure scenario is the same instance as the FolderTypeEntity for child and grandchild. Supplying a different instance of the same DB entity solves the problem. This leads me to believe that there might be a problem with the internal hashtables.
This test passes:
[Test]
public void ExpandedDifferentInstancesOfFolderTypeSucceeds()
{
ResetCounters();
FolderTypeEntity folderTypeEntity1 = CreateFolderType();
using (DataAccessAdapter adapter = new DataAccessAdapter())
{
adapter.SaveEntity(folderTypeEntity1, true);
FolderTypeEntity folderTypeEntity2 = new FolderTypeEntity(folderTypeEntity1.FolderTypeID);
adapter.FetchEntity(folderTypeEntity2);
DoTest2(folderTypeEntity1, folderTypeEntity2);
}
}
I can send you the new TestSuite and SQL scripts if you want, but I think the problem is isolated to reusing the same instance of the FolderTypeEntity.
Does this give you anything new, or is this implicit in your explanation above?
Marcus
It's not a problem with the internal hashtables; it's a problem with multiple paths in the graph leading to the same entity, which confuses the save routine, because AFTER an entity is saved (the end of both paths) it saves the entities depending on it. If one of these is already in progress it skips that entity (in this case the root folder) and proceeds with the next (in this case child 0). As child 0 depends on the root folder (!), it can't be saved just yet, as an entity it depends on has to be saved first; because the root folder is already in progress (and will be saved later on, and therefore so will child 0), child 0 is not saved at that moment, so the routine exits (with true, to avoid making the whole save fail). Foldertype then proceeds with grandchild 0 of child 0, and that one depends on child 0, so it saves that one first. The whole thing repeats itself: child 0 can't be saved now, will be saved later. It does this by simply doing nothing and returning true (it can't abort the save action, that would abort everything). Because grandchild 0 of child 0 receives true from the save action of child 0, it thinks child 0 is saved (but it isn't) and proceeds.
This then causes a NULL in the FK to parent, which is child 0, but that's not saved yet.
One way to solve this particular scenario is to tell a save routine that the entity the current entity depends on is saved LATER. The current entity to be saved should then also abort, as it is saved later on as well. This can be accomplished by adding an entity earlier to one of the tracking hashtables (there are a couple; each has a different purpose: participating in transaction, currently in progress etc.). However this isn't a good fix, as the REAL fix would be: everything that is saved by the root folder after the root folder is saved should be marked 'in progress' as soon as the root folder is 'in progress'. This is however very tricky in complex branching graphs.
Calling the save twice will also solve this, though: after child 0 is saved, it syncs its PK with the FK in grandchild 0. Saving the graph again will update grandchild 0 with the new value. So:
using (DataAccessAdapter adapter = new DataAccessAdapter())
{
Assert.IsTrue(adapter.SaveEntity(folder));
Assert.IsTrue(adapter.SaveEntity(folder));
}
Ok, this is a 5-minute think about saving the graph; if this is STUPID, don't laugh, just smile and be polite... We have an expression in English that you may be familiar with, and I feel it's entirely appropriate to mention it here: "Don't try to teach your grandmother to suck eggs" (http://www.worldwidewords.org/qa/qa-tea1.htm explains...).
Saving a graph:
1) Get a list of nodes in the graph which represent dirty entities and any entities which are dependent on an entity whose PK is about to change. Call this the Dirty list.
2) Sort this list based on the entity with the least dependents, INSERTs first, UPDATEs second. The first node in the Dirty list is the first entity to save.
3) If this entity is not saveable i.e. some NOT NULL fields are still NULL, skip to the next. If we reach the end of the list, we have a loop and the save cannot be performed.
4) Save the entity and update dependent entities in the Dirty list by setting their corresponding FK ID to the PK ID of the saved entity.
5) If the saved entity has some missing fields, i.e. FKs not yet known, place this entity into a re-save list; otherwise remove it from the Dirty list.
6) Repeat step 2
7) If re-save list contains items, repeat step 2 with re-save list in place of Dirty list.
The thinking here is quite different from how you described the current save works and is entirely based on selecting the most appropriate entity to save first, based on the sort in step 2 (see the sketch below). I can see situations where entities might need to be saved twice, but this would be far preferable to a save failure that doesn't throw an exception.
Of course I am a complete novice regarding the intricacies of O/R mapping (hence the eggs expression) and therefore may be leaving out a ton of issues that I am not aware of…
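To make steps 1-7 concrete, here's a toy model of the loop I have in mind (Node, Deps, IsNew etc. are names I've invented for illustration, not LLBLGen types, and the re-save pass of steps 5-7 is left out):
using System;
using System.Collections.Generic;
using System.Linq;
class Node
{
public string Name;
public bool IsNew = true; // INSERT goes before UPDATE (step 2)
public List<Node> Deps = new List<Node>(); // entities that must be saved first
public bool Saved;
}
static class DirtyListSaver
{
public static void SaveGraph(List<Node> dirty)
{
var pending = new List<Node>(dirty); // step 1: the Dirty list
while (pending.Count > 0)
{
// step 2: fewest unsaved dependencies first, INSERTs before UPDATEs;
// step 3: the candidate must have all its dependencies saved already
Node candidate = pending
.OrderBy(n => n.Deps.Count(d => !d.Saved))
.ThenBy(n => n.IsNew ? 0 : 1)
.FirstOrDefault(n => n.Deps.All(d => d.Saved));
if (candidate == null)
throw new InvalidOperationException("Loop: the save cannot be performed.");
candidate.Saved = true; // step 4: the real save + FK sync would go here
pending.Remove(candidate); // steps 5/6: re-save list handling omitted
}
}
}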
hehe, typing simultaneously again...
Edit:
Otis wrote:
Calling the save twice will also solve this, though: after child 0 is saved, it syncs its PK with the FK in grandchild 0. Saving the graph again will update grandchild 0 with the new value. So:
using (DataAccessAdapter adapter = new DataAccessAdapter())
{
Assert.IsTrue(adapter.SaveEntity(folder));
Assert.IsTrue(adapter.SaveEntity(folder));
}
Works for the self-referencing table only... The expanded tables still fail with this workaround. Sorry.
No, it works (I only ran the Fails test):
using System;
using NUnit.Framework;
using SD.LLBLGen.Pro.ORMSupportClasses;
using Test;
using Test.DatabaseSpecific;
using Test.EntityClasses;
using Test.FactoryClasses;
using Test.HelperClasses;
namespace NUnitTest
{
/// <summary>
/// Summary description for Class1.
/// </summary>
[TestFixture]
public class TestSuite
{
static void Main()
{
}
[Test]
public void Succeeds()
{
FolderTypeEntity folderTypeEntity1 = CreateFolderType();
FolderTypeEntity folderTypeEntity2 = CreateFolderType();
DoTest(folderTypeEntity1, folderTypeEntity2);
}
[Test]
public void Fails()
{
FolderTypeEntity folderType = CreateFolderType();
folderType.Name = "Test type: " + DateTime.Now.ToLongTimeString();
DoTest(folderType, folderType);
}
private static void DoTest(FolderTypeEntity folderType1, FolderTypeEntity folderType2)
{
FolderEntity folder = CreateFolder(null, folderType1);
folder.Name = "Rootfolder";
string name = folder.Name;
// create hierarchy. Create 3 children and each child has 3 children.
for (int i = 0; i < 3; i++)
{
FolderEntity childFolder = CreateFolder(folder, folderType2);
childFolder.Name = "Child " + i;
//folder.ChildFolderCollection.Add(childFolder);
for (int j = 0; j < 3; j++)
{
FolderEntity grandchildFolder = CreateFolder(childFolder, folderType2);
grandchildFolder.Name = string.Format("Grandchild {0} of {1}",j,i);
//childFolder.ChildFolderCollection.Add(grandchildFolder);
}
}
using (DataAccessAdapter adapter = new DataAccessAdapter())
{
Assert.IsTrue(adapter.SaveEntity(folder));
Assert.IsTrue(adapter.SaveEntity(folder));
}
FolderEntity fetchedFolder = new FolderEntity();
fetchedFolder.Name = name;
IPrefetchPath2 prefetch = new PrefetchPath2((int)EntityType.FolderEntity);
prefetch.Add(FolderEntity.PrefetchPathChildFolderCollection).SubPath.Add(FolderEntity.PrefetchPathChildFolderCollection);
using (DataAccessAdapter adapter = new DataAccessAdapter())
{
adapter.FetchEntityUsingUniqueConstraint(fetchedFolder, fetchedFolder.ConstructFilterForUCName(), prefetch);
}
Assert.AreEqual(3, fetchedFolder.ChildFolderCollection.Count);
foreach (FolderEntity childFolder in fetchedFolder.ChildFolderCollection)
{
Assert.AreEqual(3, childFolder.ChildFolderCollection.Count);
}
}
private static FolderEntity CreateFolder(FolderEntity parentFolderEntity, FolderTypeEntity folderTypeEntity)
{
FolderEntity folder = new FolderEntity();
folder.ParentFolder = parentFolderEntity;
folder.FolderType = folderTypeEntity;
//folder.Name = Guid.NewGuid().ToString();
return folder;
}
private static FolderTypeEntity CreateFolderType()
{
FolderTypeEntity folderType = new FolderTypeEntity();
//folderType.Name = Guid.NewGuid().ToString();
return folderType;
}
}
}
works. Also if I replace
using (DataAccessAdapter adapter = new DataAccessAdapter())
{
Assert.IsTrue(adapter.SaveEntity(folder));
Assert.IsTrue(adapter.SaveEntity(folder));
}
with
using (DataAccessAdapter adapter = new DataAccessAdapter())
{
Assert.IsTrue(adapter.SaveEntity(folderType1));
}
it works as well. I'm not sure why it doesn't work at your place.
Yes agreed. With the self-referencing table it does indeed work.
I made some further tests which created separate ChildFolder and GrandchildFolder tables to eliminate the self-referencing table, just to see how it would behave.
The results I get after the save are:
select * from Folder
select * from ChildFolder
select * from GrandchildFolder
FolderID FolderTypeID ParentFolderID Name
897 152 NULL Root Folder 0
(1 row(s) affected)
ChildFolderID FolderTypeID ParentFolderID Name
73 152 897 Child 0
74 152 897 Child 1
75 152 897 Child 2
(3 row(s) affected)
GrandchildFolderID FolderTypeID ParentFolderID Name
316 152 75 Grandchild 0
317 152 75 Grandchild 1
318 152 75 Grandchild 2
319 152 75 Grandchild 3
320 152 75 Grandchild 4
321 152 75 Grandchild 5
322 152 75 Grandchild 6
323 152 75 Grandchild 7
324 152 75 Grandchild 8
(9 row(s) affected)
All grandchildren get their ParentFolderID set to Child 2 (ID 75).
ExpandedFails is the only test that still fails...
using NUnit.Framework;
using SD.LLBLGen.Pro.ORMSupportClasses;
using Test;
using Test.DatabaseSpecific;
using Test.EntityClasses;
namespace NUnitTest
{
/// <summary>
/// Summary description for Class1.
/// </summary>
[TestFixture]
public class TestSuite
{
private int _folderCount = 0;
private int _childCount = 0;
private int _grandchildCount = 0;
private int _folderTypeCount = 0;
[Test]
public void Succeeds()
{
ResetCounters();
FolderTypeEntity folderTypeEntity1 = CreateFolderType();
FolderTypeEntity folderTypeEntity2 = CreateFolderType();
DoTest(folderTypeEntity1, folderTypeEntity2);
}
[Test]
public void Fails()
{
ResetCounters();
FolderTypeEntity folderTypeEntity = CreateFolderType();
using (DataAccessAdapter adapter = new DataAccessAdapter())
{
adapter.SaveEntity(folderTypeEntity);
}
DoTest(folderTypeEntity, folderTypeEntity);
}
[Test]
public void ExpandedSucceeds()
{
ResetCounters();
FolderTypeEntity folderTypeEntity1 = CreateFolderType();
FolderTypeEntity folderTypeEntity2 = CreateFolderType();
DoTest2(folderTypeEntity1, folderTypeEntity2);
}
[Test]
public void ExpandedFails()
{
ResetCounters();
FolderTypeEntity folderTypeEntity = CreateFolderType();
using (DataAccessAdapter adapter = new DataAccessAdapter())
{
adapter.SaveEntity(folderTypeEntity);
}
DoTest2(folderTypeEntity, folderTypeEntity);
}
[Test]
public void ExpandedDifferentInstancesOfFolderTypeSucceeds()
{
ResetCounters();
FolderTypeEntity folderTypeEntity1 = CreateFolderType();
using (DataAccessAdapter adapter = new DataAccessAdapter())
{
adapter.SaveEntity(folderTypeEntity1, true);
FolderTypeEntity folderTypeEntity2 = new FolderTypeEntity(folderTypeEntity1.FolderTypeID);
adapter.FetchEntity(folderTypeEntity2);
DoTest2(folderTypeEntity1, folderTypeEntity2);
}
}
private void DoTest(FolderTypeEntity folderTypeEntity1, FolderTypeEntity folderTypeEntity2)
{
FolderEntity folderEntity = CreateFolder(folderTypeEntity1);
string name = folderEntity.Name;
for (int i = 0; i < 3; i++)
{
FolderEntity childFolderEntity = CreateFolder(folderTypeEntity2);
folderEntity.ChildFolderCollection.Add(childFolderEntity);
for (int j = 0; j < 3; j++)
{
FolderEntity grandchildFolderEntity = CreateFolder(folderTypeEntity2);
childFolderEntity.ChildFolderCollection.Add(grandchildFolderEntity);
}
}
using (DataAccessAdapter adapter = new DataAccessAdapter())
{
Assert.IsTrue(adapter.SaveEntity(folderEntity));
Assert.IsTrue(adapter.SaveEntity(folderEntity));
}
FolderEntity fetchedFolder = new FolderEntity();
fetchedFolder.Name = name;
IPrefetchPath2 prefetch = new PrefetchPath2((int)EntityType.FolderEntity);
prefetch.Add(FolderEntity.PrefetchPathChildFolderCollection).SubPath.Add(FolderEntity.PrefetchPathChildFolderCollection);
using (DataAccessAdapter adapter = new DataAccessAdapter())
{
adapter.FetchEntityUsingUniqueConstraint(fetchedFolder, fetchedFolder.ConstructFilterForUCName(), prefetch);
}
Assert.AreEqual(3, fetchedFolder.ChildFolderCollection.Count);
foreach (FolderEntity childFolder in fetchedFolder.ChildFolderCollection)
{
Assert.AreEqual(3, childFolder.ChildFolderCollection.Count);
}
}
private void DoTest2(FolderTypeEntity folderTypeEntity1, FolderTypeEntity folderTypeEntity2)
{
FolderEntity folderEntity = CreateFolder(folderTypeEntity1);
string name = folderEntity.Name;
for (int i = 0; i < 3; i++)
{
ChildFolderEntity childFolderEntity = CreateChildFolder(folderTypeEntity2);
folderEntity.ChildFolders.Add(childFolderEntity);
for (int j = 0; j < 3; j++)
{
GrandchildFolderEntity grandchildFolderEntity = CreateGrandchildFolder(folderTypeEntity2);
childFolderEntity.GrandchildFolders.Add(grandchildFolderEntity);
}
}
using (DataAccessAdapter adapter = new DataAccessAdapter())
{
Assert.IsTrue(adapter.SaveEntity(folderEntity));
Assert.IsTrue(adapter.SaveEntity(folderEntity));
}
FolderEntity fetchedFolder = new FolderEntity();
fetchedFolder.Name = name;
IPrefetchPath2 prefetch = new PrefetchPath2((int)EntityType.FolderEntity);
prefetch.Add(FolderEntity.PrefetchPathChildFolders).SubPath.Add(ChildFolderEntity.PrefetchPathGrandchildFolders);
using (DataAccessAdapter adapter = new DataAccessAdapter())
{
adapter.FetchEntityUsingUniqueConstraint(fetchedFolder, fetchedFolder.ConstructFilterForUCName(), prefetch);
}
Assert.AreEqual(3, fetchedFolder.ChildFolders.Count);
foreach (ChildFolderEntity childFolder in fetchedFolder.ChildFolders)
{
Assert.AreEqual(3, childFolder.GrandchildFolders.Count);
}
}
private FolderEntity CreateFolder(FolderTypeEntity folderTypeEntity)
{
FolderEntity folderEntity = new FolderEntity();
folderEntity.FolderType = folderTypeEntity;
folderEntity.Name = "Root Folder " + (_folderCount++).ToString();
return folderEntity;
}
private ChildFolderEntity CreateChildFolder(FolderTypeEntity folderTypeEntity)
{
ChildFolderEntity folderEntity = new ChildFolderEntity();
folderEntity.FolderType = folderTypeEntity;
folderEntity.Name = "Child " + (_childCount++).ToString();
return folderEntity;
}
private GrandchildFolderEntity CreateGrandchildFolder(FolderTypeEntity folderTypeEntity)
{
GrandchildFolderEntity folderEntity = new GrandchildFolderEntity();
folderEntity.FolderType = folderTypeEntity;
folderEntity.Name = "Grandchild " + (_grandchildCount++).ToString();
return folderEntity;
}
private FolderTypeEntity CreateFolderType()
{
FolderTypeEntity folderTypeEntity = new FolderTypeEntity();
folderTypeEntity.Name = "Folder Type " + (_folderTypeCount++).ToString();;
return folderTypeEntity;
}
private void ResetCounters()
{
_folderCount = 0;
_childCount = 0;
_grandchildCount = 0;
_folderTypeCount = 0;
}
}
}
CREATE TABLE [dbo].[ChildFolder] (
[ChildFolderID] [int] IDENTITY (1, 1) NOT NULL ,
[FolderTypeID] [int] NOT NULL ,
[ParentFolderID] [int] NULL ,
[Name] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[Folder] (
[FolderID] [int] IDENTITY (1, 1) NOT NULL ,
[FolderTypeID] [int] NOT NULL ,
[ParentFolderID] [int] NULL ,
[Name] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[FolderType] (
[FolderTypeID] [int] IDENTITY (1, 1) NOT NULL ,
[Name] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[GrandchildFolder] (
[GrandchildFolderID] [int] IDENTITY (1, 1) NOT NULL ,
[FolderTypeID] [int] NOT NULL ,
[ParentFolderID] [int] NULL ,
[Name] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[ChildFolder] ADD
CONSTRAINT [PK_ChildFolder_1] PRIMARY KEY CLUSTERED
(
[ChildFolderID]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Folder] ADD
CONSTRAINT [PK_Folder] PRIMARY KEY CLUSTERED
(
[FolderID]
) ON [PRIMARY] ,
CONSTRAINT [UC_Folder_Name] UNIQUE NONCLUSTERED
(
[Name]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[FolderType] ADD
CONSTRAINT [PK_FolderType] PRIMARY KEY CLUSTERED
(
[FolderTypeID]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[ChildFolder] ADD
CONSTRAINT [FK_ChildFolder_Folder1] FOREIGN KEY
(
[ParentFolderID]
) REFERENCES [dbo].[Folder] (
[FolderID]
) ON DELETE CASCADE ON UPDATE CASCADE ,
CONSTRAINT [FK_ChildFolder_FolderType] FOREIGN KEY
(
[FolderTypeID]
) REFERENCES [dbo].[FolderType] (
[FolderTypeID]
)
GO
ALTER TABLE [dbo].[Folder] ADD
CONSTRAINT [FK_Folder_Folder] FOREIGN KEY
(
[ParentFolderID]
) REFERENCES [dbo].[Folder] (
[FolderID]
),
CONSTRAINT [FK_Folder_FolderType] FOREIGN KEY
(
[FolderTypeID]
) REFERENCES [dbo].[FolderType] (
[FolderTypeID]
)
GO
ALTER TABLE [dbo].[GrandchildFolder] ADD
CONSTRAINT [FK_GrandchildFolder_ChildFolder] FOREIGN KEY
(
[ParentFolderID]
) REFERENCES [dbo].[ChildFolder] (
[ChildFolderID]
) ON DELETE CASCADE ON UPDATE CASCADE ,
CONSTRAINT [FK_GrandchildFolder_FolderType] FOREIGN KEY
(
[FolderTypeID]
) REFERENCES [dbo].[FolderType] (
[FolderTypeID]
)
GO
Marcus wrote:
Ok, this is a 5 minute think about saving the graph, if this is STUPID, don’t laugh, just smile and be polite
… We have an expression in English that you may be familiar with and I feel it’s entirely appropriate to mention it here. “Don’t try to teach your grandmother to suck eggs” http://www.worldwidewords.org/qa/qa-tea1.htm explains...
Saving a graph:
1) Get a list of nodes in the graph which represent dirty entities and any entities which are dependent on an entity whose PK is about to change. Call this the Dirty list.
A list of all the dirty entities in a graph is enough; the save routine will fire an event when an entity has been saved successfully, which will cause the sync of the FKs of related entities with the PK of the entity just saved (if any).
2) Sort this list based on the entity with the least dependents, INSERTs first, UPDATEs second. The first node in the Dirty list is the first entity to save.
No, you have to sort on dependency: if A depends on B, B has to be saved first, then A. True, you have to split them into inserts and updates, and first do inserts, then updates. The number of dependents is not important; it's important that the list is ordered in such a way that an entity at position n only depends on entities in [0, n-1], never on entities later in the list. For a single path this is easy; for a complex graph this can be harder, though the current code walks the graph already, so the routine could be split up into an entity-collector routine and a true save routine. However, it would change the meaning of the SaveEntity() routine, which would break applications that override that method.
The fear I have is that it's very hard to prove the routine is correct and works for all graphs. The current routine is solid and falls apart only in very weird graphs, which are often a result of unconventional usage of a relational database and/or saving the graph not using the top node.
In other words: if I re-implement it, it might cause a lot of problems.
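As an aside: checking whether a given order satisfies that invariant is cheap, even if producing the order is the hard part. A sketch, reusing the kind of toy Node (with a Deps list of the entities it depends on) from your post above; invented names, not the runtime code:
static bool IsValidSaveOrder(List<Node> order)
{
var alreadySaved = new HashSet<Node>();
foreach (Node entity in order)
{
foreach (Node dep in entity.Deps)
if (!alreadySaved.Contains(dep))
return false; // depends on an entity saved later: invalid order
alreadySaved.Add(entity);
}
return true;
}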
3) If this entity is not saveable i.e. some NOT NULL fields are still NULL, skip to the next. If we reach the end of the list, we have a loop and the save cannot be performed.
4) Save the entity and update dependent entities in the Dirty list by setting their corresponding FK ID to the PK ID of the saved entity.
5) If the saved entity has some missing fields, i.e. FKs not yet known, place this entity into a re-save list; otherwise remove it from the Dirty list.
6) Repeat step 2
7) If re-save list contains items, repeat step 2 with re-save list in place of Dirty list.
The thinking here is quite different from how you described the current save works and is entirely based on selecting the most appropriate entity to save first, based on the sort in step 2. I can see situations where entities might need to be saved twice, but this would be far preferable to a save failure that doesn't throw an exception.
Entities only need to be saved twice in the case of a loop and I don't support loops in models (i.e.: A -FK-> B and B -FK-> A) as these are bad design. In all other cases there is a solution which can save all entities in one go, as long as the order is correct. I have to think a bit deeper on this to find a solid way to determine the right order of all entities to be saved.
Of course I am a complete novice regarding the intricacies of O/R mapping (hence the eggs expression) and therefore may be leaving out a ton of issues that I am not aware of…
heh Well, the recursive save routines are among the more complex issues of O/R mapping. Because there is no 'graph of relations' present in LLBLGen Pro, as everything is disconnected from any 'context' or 'session' object (because that would give issues in distributed scenarios etc.), the entities themselves supply information on their related entities, and that information is used during the recursive save action. As the routine is constructed, this works OK: it can decide what to do based on the current state, the entity currently being saved and its directly related entities. It can't look further ahead, which is why it fails in your graph IF you don't save the graph from the root node. Saving your graph via the foldertype works.
Otis wrote:
The current routine is solid and falls apart only in very weird graphs which are often a result of unconventional usage of a relational database, and/or saving the graph not using the top node.
I guess this is my problem then, not saving the graph from the top node...
I will do an audit of my code to ensure that I always save from the top node. This is going to be tricky as I have utility methods which save entities for me and I will need to check the object graph in order to determine what is the top node for a given set of fetched entities...
For instance, if FolderType is not fetched, I would save FolderEntity, but if FolderType is fetched, then I would need to save FolderType... Is this assumption correct?
You only need this on wicked graphs with multiple paths, and those are very rare, so I'd leave all code as is and only change code which misbehaves. You always have to save the foldertype first in this particular exceptional case.
Otis wrote:
You only need this on wicked graphs with multiple paths, and those are very rare, so I'd leave all code as is and only change code which misbehaves. You always have to save the foldertype first in this particular exceptional case.
Sorry to drag this out... but I need to understand the rule. If FolderType is not fetched and I want to update the FolderEntity tree, in order to save correctly, I need to fetch FolderType first, and save that?
using (DataAccessAdapter adapter = new DataAccessAdapter())
{
if (folderEntity.FolderType == null)
folderEntity.FolderType = (FolderTypeEntity)adapter.FetchNewEntity(new FolderTypeEntityFactory(), folderEntity.GetRelationInfoFolderType());
adapter.SaveEntity(folderEntity.FolderType);
}
Of course there may be many folderTypes attached to different folderEntities within the tree... so do I first need to create a list of them by recursing the graph and save each unique FolderType in this way? That doesn't sound right...
Looking at this expanded schema (not sure if the img tag is working) I can't see that it is unusual? This is not my schema (I use a single Folder table) but this also fails the tests as I mentioned above.
Marcus wrote:
Otis wrote:
You only need this on wicked graphs with multiple paths, and those are very rare, so I'd leave all code as is and only change code which misbehaves. You always have to save the foldertype first in this particular exceptional case.
Sorry to drag this out... but I need to understand the rule. If FolderType is not fetched and I want to update the FolderEntity tree, in order to save correctly, I need to fetch FolderType first, and save that?
using (DataAccessAdapter adapter = new DataAccessAdapter())
{
if (folderEntity.FolderType == null)
folderEntity.FolderType = (FolderTypeEntity)adapter.FetchNewEntity(new FolderTypeEntityFactory(), folderEntity.GetRelationInfoFolderType());
adapter.SaveEntity(folderEntity.FolderType);
}
Of course there may be many folderTypes attached to different folderEntities within the tree... so do I first need to create a list of them by recursing the graph and save each unique FolderType in this way? That doesn't sound right...
That's indeed cumbersome. You can also save the graph twice. The first time, the grandchildren will have NULL for their parents; the second time only the grandchildren are updated.
Looking at this expanded schema (not sure if the img tag is working) I can't see that it is unusual? This is not my schema (I use a single Folder table) but this also fails the tests as I mentioned above.
The unusual part is that there are 3 entities defined for the same semantic entity, a folder, thereby creating the second path in the graph at runtime. Normally you'd have 1 table, like you had.
Still, that indeed doesn't solve it, but I can't solve it at this time. I need to implement a new save routine with a 2-stage approach, which will take some time to get right, as this is very complex stuff and proving a save routine correct for all possible graphs is no picnic. Plus I have to keep the current API working as defined, which means that if you call SaveEntityCollection() there will be calls made to SaveEntity(), which some people have overridden to do last-minute maintenance.
Otis wrote:
That's indeed cumbersome. You can also save the graph twice. The first time, the grandchildren will have NULL for their parents; the second time only the grandchildren are updated.
Okay, I've tested this with the self-referencing table schema I'm using and it works, so I will use this method for now. Bear in mind, for anyone else: saving twice does not work for the schema in the diagram image (above); that test fails even with a double save, as I mentioned previously.
Otis wrote:
Still, that indeed doesn't solve it, but I can't solve it at this time. I need to implement a new save routine with a 2-stage approach, which will take some time to get right, as this is very complex stuff and proving a save routine correct for all possible graphs is no picnic. Plus I have to keep the current API working as defined, which means that if you call SaveEntityCollection() there will be calls made to SaveEntity(), which some people have overridden to do last-minute maintenance.
I completely understand the complexity of this and am happy with the workaround. Obviously saves that fail and don't report as such are a big issue... Might it be an idea to add the double save to the core so it's hidden from the average user until a more permanent fix is available? The second save would normally do nothing, right?
And thanks for your lengthy assistance on this issue. I'm always comfortable in the knowledge that support is first class. (Even if you didn't fix it!)
Well, I can't stand it when there are bugs in my code, so I can't stand it with this either. Though I think I found a way to fix it within the current code: if a save is performed on a related entity (as in grandchild's case, which first needs to save child) and somewhere along that path an entity was postponed, the entity itself is also postponed, in this case grandchild. This will always work, as grandchild will always be saved by child. Which solves the problem.
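In pseudocode the rule looks roughly like this (a conceptual sketch with invented names like Entity and InProgress, not the actual runtime code; here the return value means 'really persisted', where the real routine returns true in more cases):
using System.Collections.Generic;
class Entity
{
public List<Entity> DependsOn = new List<Entity>();
public bool InProgress, Saved, Postponed;
}
static class RecursiveSaver
{
public static bool Save(Entity entity)
{
bool postponed = false;
foreach (Entity dep in entity.DependsOn)
{
if (dep.InProgress)
postponed = true; // dep is saved later by the path already in progress
else if (!dep.Saved && !Save(dep))
postponed = true; // a postpone further down the path bubbles up
}
if (postponed)
{
entity.Postponed = true; // back off; the in-progress path will save us
return false;
}
entity.Saved = true; // the real persist + PK/FK sync would happen here
return true;
}
}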
I'll try some code tomorrow