Distributed database issues when using LLBLGenPro
I realize that the term "distributed database" can mean different scenarios to different folks, so let me describe my particular scenario and how I have solved some of the issues.
I have a Winforms application for doing a completions job on an oil well. The engineer who uses this will usually be in a remote part of the world. It's ok to require him to connect to a central server via the internet when he starts a job, but after that he will need the ability to work offline. For various reasons when he creates a new completion job we want that information to be in the central database right away, even though it may take him months to finish the job and upload all of the data for later analysis.
So, I have a completion table
CREATE TABLE Completion
(
CompletionID int IDENTITY
CONSTRAINT PK_Completion PRIMARY KEY,
...
)
Of course there are other fields, including FKs. When the user creates a completion job, he may also need to create some of these FKs. For example, it may be a new well, so he has to create the well record and borehole record and then the completion.
The process will be:
1. User enters all information on the client.
2. Entities are built and sent via remoting to the server.
3. Server saves to the database.
4. Server refetches and sends the entities back to the client.
5. Client saves locally.
Of course in this scenario it is logical to generate the PK on the server and send that back to the client. I could use GUIDs and generate PKs on the client, but then I would have all these big ugly GUIDs everywhere when a simple 4-byte int works just fine. There are a couple of tables where I do use GUIDs because I have to, but here I don't have to. I could cop out and do it, but that is not how I work. I don't write sloppy joe code. My competition does that, which is why I am doing this job and not my competition. The competition got fired.
Speaking of getting fired, whoever at MS came up with IDENTITY_INSERT should have been fired. If they had said, "some programmer or DBA somewhere may want to prevent users from explicitly inserting key values", that's fine, they could have provided an option for that. But instead they punished everybody (including themselves, because they had to deal with this when they implemented replication by adding the "NOT FOR REPLICATION" clause). Notice they don't do it when you use NEWID, or for that matter any other way of generating primary keys on the server, whether done by SQL Server itself or in code. I just cannot come up with any rationale for this. Although lots of people have written articles about the problems of dealing with IDENTITY fields, I don't see anybody pointing out the obvious, which is that MS simply made a mistake.
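For anyone who hasn't hit this before, here is a minimal illustration of the behavior I'm complaining about (the WellName column stands in for the "..." columns in my table above):

```sql
-- Fails: "Cannot insert explicit value for identity column in table 'Completion'..."
INSERT INTO Completion (CompletionID, WellName) VALUES (42, 'Test well');

-- Works, but only one table per session may have the setting ON at a time,
-- and the caller needs sufficient permissions on the table.
SET IDENTITY_INSERT Completion ON;
INSERT INTO Completion (CompletionID, WellName) VALUES (42, 'Test well');
SET IDENTITY_INSERT Completion OFF;
```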
Anyway... once the completion entity is saved on the server, how do I save it on the client, since I have IDENTITY on the client as well? Of course, I could leave IDENTITY out of my client script. But then I have two (slightly) different schemas, which introduces a small admin headache. Plus, do the DALs generated by LLBLGenPro have to be different? I believe so; otherwise one of them will think that IDENTITY is present when it's not, or vice versa.
So, I leave IDENTITY intact on the client. But LLBLGenPro doesn't let me tell the DAL at runtime to use IDENTITY_INSERT. So, as suggested in another thread, I subclassed DataAccessAdapter and overrode OnSaveEntity. I then realized I not only have to prepend and append the SET statements, but also need to modify the SQL statement itself, because (of course) LLBLGenPro doesn't try to insert the PK field, and it asks SQL Server for the generated PK back in the same statement. So there's a bit more work to do doctoring the SQL, and of course I have to do this for every entity that has IDENTITY and is being sent from the server (could be a lot - there are a lot of small lookup tables).
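The shape of that subclass is roughly the following. This is a sketch only: the exact override point and signatures depend on your runtime library version, and HasIdentityPk, GetTableName, and RewriteInsertToIncludePk are hypothetical helpers standing in for the query-doctoring work described above.

```csharp
// Sketch: wrap the generated INSERT in SET IDENTITY_INSERT ON/OFF and
// rewrite it so the PK value is supplied instead of requested back.
public class IdentityInsertAdapter : DataAccessAdapter
{
    protected override void OnSaveEntity(IActionQuery saveQuery, IEntity2 entityToSave)
    {
        if (entityToSave.IsNew && HasIdentityPk(entityToSave))   // hypothetical helper
        {
            string table = GetTableName(entityToSave);           // hypothetical helper
            saveQuery.Command.CommandText =
                "SET IDENTITY_INSERT " + table + " ON; " +
                RewriteInsertToIncludePk(saveQuery.Command.CommandText, entityToSave) +
                "; SET IDENTITY_INSERT " + table + " OFF;";      // hypothetical helper
        }
        base.OnSaveEntity(saveQuery, entityToSave);
    }
}
```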
I also remembered while doing this that the user account I'm using must be dbo or a member of that role, or it can't issue SET IDENTITY_INSERT. Thanks, Microsoft. I can do that, but I'm not 100% certain the client won't later tell me that this won't be allowed.
At this point I realize that I'm going to have to give up IDENTITY. I dig up a CD with a program I wrote ten years ago that has TSQL to safely generate PKs in a stored proc (yes, it can be safely done). With that out of the way, I find that there is another issue to deal with.
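For the curious, the usual safe pattern is a key-counter table that is incremented and read in a single atomic statement, so there is no window between the read and the update for another connection to grab the same value. A sketch (table and procedure names are invented, not from my old program):

```sql
-- Sketch of a concurrency-safe key generator.
CREATE TABLE KeyCounter
(
    TableName sysname NOT NULL PRIMARY KEY,
    LastID    int     NOT NULL
);
GO
CREATE PROCEDURE GetNextID
    @TableName sysname,
    @NextID    int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    -- Assign and read in one UPDATE; the row lock covers both,
    -- so two callers can never receive the same value.
    UPDATE KeyCounter
       SET @NextID = LastID = LastID + 1
     WHERE TableName = @TableName;
END
```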
When my entities come back from the server, they are not new or dirty. That makes sense, because they have been fetched on the server, but now they are passed back to the client, where I need to save them again. I found another thread with a suggestion by Frans for handling this very problem:
http://www.llblgen.com/tinyforum/Messages.aspx?ThreadID=1543
However, I had to modify the code slightly: you don't want to mark fields that are null as changed, because then they would be included in the insert statement.
So here is my utility function to make an entity "new" again:
private void MakeNew(EntityBase2 entity)
{
    // Reset the persistence state so the adapter treats this as an insert.
    entity.IsNew = true;
    entity.IsDirty = true;
    entity.Fields.IsDirty = true;

    // Mark only non-null fields as changed; a field flagged IsChanged
    // ends up in the generated INSERT statement.
    for (int i = 0; i < entity.Fields.Count; i++)
    {
        EntityField2 field = (EntityField2)entity.Fields[i];
        if (!field.IsNull)
        {
            field.IsChanged = true;
        }
    }
}
I'm not sure why the fields collection itself has a flag, when each individual field also has a flag. Also, it strikes me as inconsistent to have both terms "IsDirty" and "IsChanged" instead of just one, but maybe there's a good reason for all that. In any event, this code seems to do the trick.
Here is an excerpt from my BLL function that takes a new completion entity, saves it remotely, then makes it new again and saves it locally. There are actually several related entities and some if/then logic, because not all the entities may be non-null, and some may already exist on the client. So when you're doing this, you have to think of all those possibilities, obviously.
public void SaveCompletion(CompletionEntity completion)
{
    // Save remotely first; the server generates the PKs and refetches.
    CompletionEntity newCompletion = RemotingServer.Instance.SaveCompletion(completion);

    // The refetched graph comes back "not new, not dirty", so reset the
    // state on each entity before saving it locally.
    if (newCompletion.CompletionRig != null)
        MakeNew(newCompletion.CompletionRig);
    if (newCompletion.CompletionRigless != null)
        MakeNew(newCompletion.CompletionRigless);
    foreach (CompletionGoalEntity completionGoal in newCompletion.CompletionGoal)
        MakeNew(completionGoal);
    MakeNew(newCompletion);

    using (DataAccessAdapter adapter = new DataAccessAdapter())
    {
        adapter.SaveEntity(newCompletion);
    }
}
I would be very interested in hearing if anybody else has done something similar (and perhaps better).
I'm not sure why the fields collection itself has a flag, when each individual field also has a flag. Also, it strikes me as inconsistent to have both terms "IsDirty" and "IsChanged" instead of just one, but maybe there's a good reason for all that. In any event, this code seems to do the trick.
For the simple reason that IsDirty signals whether an entity is dirty (all of an entity's data is in the entity fields object): if an entity has 100 fields, I don't have to check 100 IsChanged flags, just 1 flag. With a big collection, checking every field would severely hurt performance.
The IsChanged flags are required too though, as they're used to check for which fields update/insert code has to be generated.
So IsDirty -> should this entity be saved at all? IsChanged -> should this field be included in the generated update/insert query?
IsDirty is processed in the ORMSupportClasses core, IsChanged is processed in the DQE.
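That division of labor can be sketched as follows (a hypothetical save routine, not actual runtime code; AddToSetClause stands in for the DQE's query generation):

```csharp
// Hypothetical sketch of the two-level check described above.
if (!entity.Fields.IsDirty)
{
    return;  // one flag check skips the entity entirely
}
foreach (IEntityField2 field in entity.Fields)
{
    if (field.IsChanged)
    {
        AddToSetClause(field);  // hypothetical: field goes into the UPDATE/INSERT
    }
}
```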