Unit of Work and Identity Maps
Joined: 04-Feb-2004
Does anyone have any thoughts, tips, or samples when attempting to implement the "Unit of Work" and "Identity Map" patterns with LLBLGen objects?
In a nutshell, "Unit of Work" maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems.
The "Identity Map" pattern ensures that each object gets loaded only once, keeping every loaded object in a map. It looks up objects using the map when referring to them.
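For reference, here is the rough shape I have in mind for the Unit of Work side: a plain Fowler-style sketch in C#, nothing LLBLGen-specific (all names are illustrative).

```csharp
// Plain Fowler-style Unit of Work sketch (illustrative only, not LLBLGen Pro API):
// objects are registered as new, dirty or removed, and Commit() writes them
// out in one go through whatever persistence code you already have.
using System.Collections;

public interface IPersister
{
    void Insert(object entity);
    void Update(object entity);
    void Delete(object entity);
}

public class UnitOfWork
{
    private readonly ArrayList _newObjects = new ArrayList();
    private readonly ArrayList _dirtyObjects = new ArrayList();
    private readonly ArrayList _removedObjects = new ArrayList();

    public void RegisterNew(object entity)     { _newObjects.Add(entity); }
    public void RegisterDirty(object entity)   { if(!_dirtyObjects.Contains(entity)) { _dirtyObjects.Add(entity); } }
    public void RegisterRemoved(object entity) { _removedObjects.Add(entity); }

    // Writes all tracked changes as one unit. Transaction handling is left
    // to the persister (e.g. an ADO.NET transaction) and omitted here.
    public void Commit(IPersister persister)
    {
        foreach(object o in _newObjects)     { persister.Insert(o); }
        foreach(object o in _dirtyObjects)   { persister.Update(o); }
        foreach(object o in _removedObjects) { persister.Delete(o); }
        _newObjects.Clear();
        _dirtyObjects.Clear();
        _removedObjects.Clear();
    }
}
```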
Thanks in advance.
Joined: 17-Aug-2003
Devildog74 wrote:
Does anyone have any thoughts, tips, or samples when attempting to implement the "Unit of Work" and "Identity Map" patterns with LLBLGen objects? In a nutshell, "Unit of Work" maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems.
I dropped this, because it's non-intuitive in a lot of scenarios. (Well, I didn't drop it, I implemented it differently.)
You can see the UnitOfWork as the selfservicing Transaction object: you add the entities and collections you want to participate in one transaction.
However, it goes further than that: if you want an entity to participate in a graph, you add it to the referencing entity and it will be persisted in the same transaction (recursive saves).
I find that more intuitive than 'unit of work', which requires you to say to some broker "this is an object I'm working on", while the broker can figure that out by itself when you call a method like 'SaveEntity()' or 'Save()' and it checks whether the object has changed.
This is also the reason why there isn't a 'StartTracking()' kind of method or 'StartUnitOfWork()' or other nonsense. The developer pulls the objects from the db when he/she sees fit and works on them.
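Roughly, the selfservicing usage looks like this. It's just a sketch assuming a Northwind-style generated model (CustomerEntity, OrderEntity); the exact constructor and Save overloads may differ from your generated code, and the usings for the generated entity classes are omitted.

```csharp
// Sketch: selfservicing Transaction usage with recursive save.
using System;
using System.Data;

public class OrderPersister
{
    public void SaveNewOrder()
    {
        Transaction trans = new Transaction(IsolationLevel.ReadCommitted, "NewOrder");
        try
        {
            CustomerEntity customer = new CustomerEntity("CHOPS");   // fetch by PK
            OrderEntity order = new OrderEntity();
            order.OrderDate = DateTime.Now;
            customer.Orders.Add(order);     // build the graph in memory

            trans.Add(customer);            // let the graph participate in this transaction
            customer.Save(true);            // recursive save: customer and the new order
            trans.Commit();
        }
        catch
        {
            trans.Rollback();
            throw;
        }
    }
}
```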
The "Identity Map" pattern ensures that each object gets loaded only once, keeping every loaded object in a map. It looks up objects using the map when referring to them.
I dropped this too because it is not doable in .NET, unless you restrict yourself to 'uniquing' in a single BL (Business Logic) transaction (a sequence of actions in the BL, often spanning a lot of objects and windows). The reason for this is that in, for example, ASP.NET applications you have appDomain recycling. This means that although you have one application, you can have multiple appDomains, and different users can be in different appDomains, which means that objects will get loaded more than once when this happens. If your code relies on absolutely unique objects, you will burn your hands.
This is also the reason why the documentation describes at length where entities live, what an in-memory instance of entity data really means (a mirror) and that there is no way you should rely on an in-memory entity instance being unique in the complete application. A 'transaction' is therefore not the start of a complete series of actions on entities in memory, including user input etc., but an atomic unit with actions on entities in the database.
Make no mistake: every product on .NET which claims a working (!) Identity Map implementation, uniquing and other fanciness is telling a lie, unless they've also implemented a distributed, cross-appDomain, cross-thread cache system. (No-one has.)
A BL transaction is a concept which doesn't exist in LLBLGen Pro, only with Selfservicing and COM+ support (Adapter doesn't have COM+ support due to lack of demand. If people need it I'll add it). As a BL transaction is something which has to be absolutely solid, it can't be done unless the transaction broker is fail-proof. COM+ is to some extent, but we'll see true BL transactions first with Longhorn and MBF. With a fail-proof BL transaction I mean a transaction which can deal with system crashes, works distributed, etc. Otherwise you don't need a BL transaction broker.
A BL transaction is therefore often 'imitated' by developers with a sequence of actions, and the real transaction is at the end, using an ADO.NET transaction. So how the developer reaches that point is not important, he just has to supply the data to the ADO.NET transaction. You can see this as a 'shopping cart' presented to a customer, for example. The customer spends 40 minutes on the website, and in the end he finalizes the order(s) or goes away. Should these 40 minutes be one BL transaction (you should see it that way in the world of UnitOfWork/Identity Map)? Or should there be no BL transaction and just an ADO.NET/LLBLGen Pro transaction at the end when the customer finalizes the orders?
As you can see, this is not a walk in the park: there are plenty of things to come up with which will make a BL transaction of 40 minutes a disaster to develop. And why should you? What matters is the last action: finalizing the data.
It's a bit like using a source control system: do you want to track each change you make in a file in the source control system and in the end simply say "I'm done" (ClearCase), or do you simply let the developers check out files (or work on files, in the case of Subversion/CVS) and when they're done, check them in, merging changes? When you opt for the first, you have to be absolutely sure the underlying foundation is able to handle every disaster possible: crashing machines, dropping network connections, multiple users hammering the same data, etc.
Only fat systems at banks (you know, those 30-year-old NEC mainframes) and other large corporations can deal with that. Every attempt on PCs is nice, but not 100% reliable. I then think: if it's not 100% reliable, it's thus not reliable, so you shouldn't present the layer as being 'reliable' and thus shouldn't provide the layer at all.
You won't suffer a lot from this though, as most concepts which work with data (not the real-time systems) are batch-based: pull data out of the DB, work on the pulled data, put the data back into the DB. Done. Now, it depends on the O/R mapper vendor whether he wants you to set up all kinds of stuff to see a batch as a BL transaction, or will let you simply work with data as if it's in your hand and you're the only one on the system. It doesn't matter really, except the latter is much easier to use (dragging along the context in every action you do is more painful than you probably initially imagine). This is also the reason why I defined state in the DB, not in memory. In memory you have user-state, which you can see as the batch-in-progress, but also a whole lot of batches in progress (the user can log into the website and perform 5 actions there. What's the batch? From when the user logs in till he goes away, or per action?)
(Jeff: no, I didn't plan any Fowler-esque stuff for June-July, just a lot of SQL support stuff. But the concepts behind Unit of Work and Identity Map are OK, I tried to implement them as much as I was able to. Identity Map requires a distributed layer which is not possible at the moment; it's not a surprise MBF is postponed till Longhorn.)
Joined: 26-Oct-2003
Otis wrote:
(Jeff: no, I didn't plan any Fowler-esque stuff for June-July, just a lot of SQL support stuff. But the concepts behind Unit of Work and Identity Map are OK, I tried to implement them as much as I was able to. Identity Map requires a distributed layer which is not possible at the moment; it's not a surprise MBF is postponed till Longhorn.)
Sorry, must have misinterpreted this reply from http://www.llblgen.com/tinyforum/Messages.aspx?ThreadID=723 pretty badly (not to mention being slightly overaggressive in delivery time).
Otis wrote:
What Jeff mentions is the composition of new classes with embedded entity classes, like a sales order which contains a customer, order and orderdetails (inside order). This is planned at the end of the year, after inheritance is implemented in full.
BTW, I don't think a BL transaction must be 100% reliable to be useful, if the user/developer is warned of the possible consequences of using it. I know you don't like things like .StartTracking, but I would think the usefulness of such a construct would outweigh the potential for loss as long as no guarantee of reliability is given. You and I have had conversations before about this, but I still can't get over how much time this would save the average developer in maintaining delete buckets and such by simply implementing a .CancelChanges method. This may also go against your position that the only state is in the database, and that the entities should be used and saved as soon as possible, but I know that I use a Cancel button for a lot of things, and this simple functionality composes a significant portion of the time I spend on detail forms, for example.
Jeff...
Joined: 17-Aug-2003
jeffreygg wrote:
Otis wrote:
(Jeff: no, I didn't plan any Fowler-esque stuff for June-July, just a lot of SQL support stuff. But the concepts behind Unit of Work and Identity Map are OK, I tried to implement them as much as I was able to. Identity Map requires a distributed layer which is not possible at the moment; it's not a surprise MBF is postponed till Longhorn.)
Sorry, must have misinterpreted this reply from http://www.llblgen.com/tinyforum/Messages.aspx?ThreadID=723 pretty badly (not to mention being slightly overaggressive in delivery time).
Otis wrote:
What Jeff mentions is the composition of new classes with embedded entity classes, like a sales order which contains a customer, order and orderdetails (inside order). This is planned at the end of the year, after inheritance is implemented in full.
No, you didn't misinterpret it; it's however something else than plain BL transactions spanning multiple machines/processes etc. The concept of a 'business component' will be added.
BTW, I don't think a BL transaction must be 100% reliable to be useful, if the user/developer is warned of the possible consequences of using it. I know you don't like things like .StartTracking, but I would think the usefulness of such a construct would outweigh the potential for loss as long as no guarantee of reliability is given. You and I have had conversations before about this, but I still can't get over how much time this would save the average developer in maintaining delete buckets and such by simply implementing a .CancelChanges method. This may also go against your position that the only state is in the database, and that the entities should be used and saved as soon as possible, but I know that I use a Cancel button for a lot of things, and this simple functionality composes a significant portion of the time I spend on detail forms, for example.
Well, if I call ApplyChanges() I want all my changes to be applied, 100%. However, there is no 'application transaction broker' in .NET except COM+. I can't roll back changes in a lot of objects unless I keep track of every change.
I agree that it might sometimes be a little cumbersome, but don't forget that it also has a close relation with how the GUI is organized: if the GUI has to allow multiple undos, temp saves, cross-process wizards etc., you're looking at a very complex architecture. You and I probably will never use that; however, when a feature is added which seems to do that, others might (try to) use it.
Database work has to be reliable. If a developer stores data into a database in a transaction, the actions have to be atomic; there can't be a compromise. When MBF was pushed back to Longhorn because they needed Longhorn features to get things done, I could only draw the conclusion: don't try to do it another way today.
Joined: 26-Oct-2003
Well, if I call ApplyChanges() I want all my changes to be applied, 100%.
Heh, can't argue with that. However, I guess when I said "no guarantee of reliability" I meant everything but the final persist to database; the final db transaction.
However, there is no 'application transaction broker' in .NET except COM+. I can't roll back changes in a lot of objects, unless I keep track of every change.
I don't understand what an 'application transaction broker' is, but yes, I want you to track every change. I think doing it once for the good of all mankind is a Good Thing, not to mention a good selling point.
I agree that it might sometimes be a little cumbersome, but don't forget that it also has a close relation with how the GUI is organized: if the GUI has to allow multiple undos, temp saves, cross-process wizards etc., you're looking at a very complex architecture. You and I probably will never use that; however, when a feature is added which seems to do that, others might (try to) use it.
Yeesh. I don't think you need to be all things to all people, but some base functionality would go a long way, e.g. one-level undo, transaction/activity tracking, .SaveChanges/.CancelChanges, etc. In terms of feature set, it's a very low-glamour feature, but I bet it packs a wallop on the daily-time-saver-o-meter.
Jeff...
Joined: 04-Feb-2004
Otis, once again you make some very interesting points.
I like the way that LLBLGen currently works. **Edit:** I'm not implying that LLBLGen should do these things; I just want to get a feel for how I can implement them using LLBLGen objects as the transport.
I am at the point of trying to make my facade layers more usable and robust. Many times, I will have multiple objects being worked on within the phase of a usage scenario. Not necessarily a database transaction, but a workflow. This is especially true in an MDI environment where 95% of all workflows stem from a root form. I am not proposing that LLBLGen handle UnitOfWork. However, I can definitely see creating an object located in my facade that is aware of the objects that were created within it, so that it may commit changes at a later time, if in fact the operator wants to save changes. If a workflow is working with objects that are all related and you have a proper object graph, then cascading saves work great. In the event that you need to skip around the ORM model, that is where I think the UnitOfWork pattern can help.
In regards to the Identity Map: I suppose that I relate it to caching certain portions of a thick-client application without having the HTTPCache object available. I completely agree that it will not work when multiple threads and sessions are involved (nor does Martin Fowler say that this pattern will handle multiple-session updates). I can see it being useful, once again, in an MDI environment. Depending on the starting point, you might already have part of the object graph available in another MDI child form. So if the data were in an identity map, there would be no need to hit the database again. Sure, if the data changes, you need to dump it and maybe reload it. I think it could also be useful for system configuration values that don't change that often but are used in internal decision making.
Just curious what your thoughts were.
Thanks again.
Jeff: Application Transaction Broker = Distributed Transaction Coordinator
Joined: 17-Aug-2003
jeffreygg wrote:
Well, if I call ApplyChanges() I want all my changes to be applied, 100%.
Heh, can't argue with that. However, I guess when I said "no guarantee of reliability" I meant everything but the final persist to database; the final db transaction.
Yeah, after your posting I thought about this a little. I think an object which uses a technique similar to what the Transaction object (or DataAccessAdapter) does now could serve as a tracker: you add the object, and all calls (save/delete) are not executed directly but queued. There are issues with this, however.
However, there is no 'application transaction broker' in .NET except COM+. I can't roll back changes in a lot of objects, unless I keep track of every change.
I don't understand what an 'application transaction broker' is, but yes, I want you to track every change. I think doing it once for the good of all mankind is a Good Thing, not to mention a good selling point.
An application transaction broker is a distributed, app-aware object which controls a transaction among appDomains, objects, threads, processes and the like. What MS DTC does for COM+.
I agree that it might sometimes be a little cumbersome, but don't forget that it also has a close relation with how the GUI is organized: if the GUI has to allow multiple undos, temp saves, cross-process wizards etc., you're looking at a very complex architecture. You and I probably will never use that; however, when a feature is added which seems to do that, others might (try to) use it.
Yeesh. I don't think you need to be all things to all people, but some base functionality would go a long way, i.e. 1 level undo, transaction/activity tracking, .SaveChanges/.CancelChanges, etc. In terms of feature set, its a very low-glamour feature, but I bet it packs a whallop in the daily-time-saver-o-meter.
Well, cancel changes is something I don't really see the point of. Just throwing away the object will do, or not?
You have to be aware of the fact that the 'context' which tracks the changes has to be passed along to all methods you call, and, what's more important, entity storage shifts from database-centric to memory-centric, which can bring problems of its own.
Edit: Jeff, I'd like to add that I appreciate your view on the matter, don't get me wrong. I also think that in the (near) future some features will be added which will make your life much more comfortable.
Giving it some thought, I think that with a derived class of DataAccessAdapter, where you override SaveEntity(), DeleteEntity() and FetchEntity(), you can track the changes and store the calls as commands in an internal structure, and play them back when calling a new method like 'ApplyChanges', effectively simply calling the base class methods.
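Something along these lines; it's just a sketch, with the adapter method signatures simplified (the real DataAccessAdapter has more overloads), ordering problems between queued actions ignored, and the using for the generated DataAccessAdapter's namespace omitted.

```csharp
// Sketch only: a derived adapter that queues Save/Delete calls and replays
// them inside one transaction when ApplyChanges() is called.
using System.Collections;
using System.Data;
using SD.LLBLGen.Pro.ORMSupportClasses;

public class TrackingAdapter : DataAccessAdapter
{
    private class QueuedAction
    {
        public readonly bool IsDelete;
        public readonly IEntity2 Entity;

        public QueuedAction(bool isDelete, IEntity2 entity)
        {
            IsDelete = isDelete;
            Entity = entity;
        }
    }

    private readonly ArrayList _queuedActions = new ArrayList();

    public override bool SaveEntity(IEntity2 entityToSave)
    {
        _queuedActions.Add(new QueuedAction(false, entityToSave));
        return true;   // nothing has been persisted yet
    }

    public override bool DeleteEntity(IEntity2 entityToDelete)
    {
        _queuedActions.Add(new QueuedAction(true, entityToDelete));
        return true;
    }

    // New method: replay the queued calls as one unit, using the base class.
    public void ApplyChanges()
    {
        StartTransaction(IsolationLevel.ReadCommitted, "ApplyChanges");
        try
        {
            foreach(QueuedAction action in _queuedActions)
            {
                if(action.IsDelete)
                {
                    base.DeleteEntity(action.Entity);
                }
                else
                {
                    base.SaveEntity(action.Entity);
                }
            }
            Commit();
        }
        catch
        {
            Rollback();
            throw;
        }
        finally
        {
            _queuedActions.Clear();
        }
    }
}
```

FetchEntity() is left out here; you could override it in the same way if you want fetches tracked too, although fetches usually should go to the database directly.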
I'm also pleased with your feedback on the typed list. I think I can solve some of those problems in the future as well.
Joined: 26-Oct-2003
Otis wrote:
Well, cancel changes is something I don't really see the point of. Just throwing away the object will do, or not?
Yea, that's what I thought as well, except that the concept "CancelChanges" can mean something different than "RevertToDBValues", which is what would happen if you just destroy the object. Example:
If I'm working on an entity (A) that has sub-entities, I
1. Create the (A) entity
2. Then go to create a sub-entity (B) (in a different form), set the values, hit "OK", which brings me back out to (A).
3. Go back into (B), change some values, then hit "CancelChanges"
If I destroy the object, I lose everything in step 2. I just want to cancel what I did in step 3.
You have to be aware of the fact that the 'context' which tracks the changes has to be passed along to all methods you call
Isn't this just maintained inside the entities? I shouldn't have to pass that myself, right? I'm sure I'm misunderstanding this one...
and, what's more important, entity storage shifts from database-centric to memory-centric, which can bring problems of its own.
Yea, and here is where I can see your reasonable argument against doing this kind of stuff. I recognize the need to maintain a single state of the entity, especially for cross-process stuff. However, if the only true state is in the DB, then editing, changing, rolling back changes, etc. shouldn't be an issue, since the only time we're worried about state is after we commit to the DB.
...
After thinking about this for a little bit, I think I see where your problem is. Basically, the "farther" from the DB the user is making changes, the more likely we have concurrency and "state" issues. The more "memory-centric" the model is, the more the user and developer (and perhaps more importantly other users and developers) have to worry about whether the entity's state is consistent.
However, I just wonder how realistic this is. Perhaps I understand your desire to keep LLBL's philosophy consistent and keep it as close to the DB as possible. I can respect this. However, I wonder if the reality is that real-world use of the system requires us to have in-memory representations of, and processes on, the entities. I can't help but wonder what sort of impact not having a "Cancel" button on my detail forms would have on the end user, and it is just this type of thing that requires me to hold state in memory while the user decides what he really wants to do.
The alternative model that you would have is a chatty application that persists each change to each property the user makes as he makes it (TextBox.OnValidate = Entity.Save), with either no opportunity to Cancel, or with a bunch of buckets that are used to write over the DB records if the user does hit Cancel.
The problem with that is obvious: the user never intended to communicate to the other users of the system that the changes he was making were to be considered permanent and safe to use.
The user communicates to the system that the values he has entered are "persistable" when he hits the "Save" button, and not before. Thus, this requires an in-memory representation of the entity that he can play with at will, safe from the prying eyes of the other users until he lets them know it's OK to use it. Based on this, I don't see the harm in capturing his changes and allowing him to undo or persist them at will, as the principle is that this is the only time he has to do it. Since the "wall" is up protecting him from other users until he hits "Save", it seems as if the sky's the limit in terms of what the user may do with the entity he's working on - it's his entity.
In terms of duplicating this sort of process and mapping it onto properties and methods, it seems the following would be appropriate:
.BeginTracking 'Start tracking modifications to properties
.EndTracking 'Not sure if necessary as either a .Save or a .UndoChanges would implicitly call this. However, it is functionally distinct from either of these processes
.UndoChanges 'Revert to the values in the properties as they were before .BeginTracking was called.
.Revert 'Reload the entity's property values from the persistent storage. Perhaps different than destroying and recreating the entity?
Jeff...
Joined: 26-Oct-2003
Otis wrote:
Edit: Jeff, I'd like to add that I appreciate your view on the matter, don't get me wrong.
As always, I simply find it a privilege to be conversing with the author of a commercial software product. I can't overestimate the value your support brings to your already incredible tool.
I also think that in the (near) future some features will be added which will make your life much more comfortable
Sweet!
Giving it some thought, I think that with a derived class of DataAccessAdapter, where you override SaveEntity(), DeleteEntity() and FetchEntity(), you can track the changes and store the calls as commands in an internal structure, and play them back when calling a new method like 'ApplyChanges', effectively simply calling the base class methods.
<Sigh> Sadly, I'm using Self-Servicing. Not that I feel I got the short end of the stick.
Jeff...
Joined: 17-Aug-2003
Devildog74 wrote:
Otis, once again you make some very interesting points. I like the way that LLBLGen currently works. **Edit:** I'm not implying that LLBLGen should do these things; I just want to get a feel for how I can implement them using LLBLGen objects as the transport.
Don't be sorry. It is good to hear different views on how stuff should work. As I'm a person who severely hates academic discussions when it comes to real-life material, I am eager to avoid academic principles when I motivate design decisions; however, I don't always manage that, so it's good when other people voice what they think and how they see things, so I can put my own reasoning in better perspective and adjust it.
There is one thing that I won't compromise on and that's consistency. Everything should be consistent: if a method is named "DoThis", then 'this' should be done by that method, period.
How I see things related to, for example, UnitOfWork is that the core functionality of LLBLGen Pro's generated and generic code is the foundation these elements have to build on. So, for example, a UnitOfWork is a bucket of actions executing code that is already there, like saving an entity etc. Not a new core element, but an add-on element.
I am at the point of trying to make my facade layers more usable and robust. Many times, I will have multiple objects being worked on within the phase of a usage scenario. Not necessarily a database transaction, but a workflow. This is especially true in an MDI environment where 95% of all workflows stem from a root form. I am not proposing that LLBLGen handle UnitOfWork. However, I can definitely see creating an object located in my facade that is aware of the objects that were created within it, so that it may commit changes at a later time, if in fact the operator wants to save changes. If a workflow is working with objects that are all related and you have a proper object graph, then cascading saves work great. In the event that you need to skip around the ORM model, that is where I think the UnitOfWork pattern can help.
Be aware of the trap of moving the entity 'habitat' from the DB to memory. This doesn't have to be a bad thing if you are the only user of the system. It will result in problems if you are not the only user of the system.
It is key that you focus on how functionality is described, how the flow between business processes in your application is defined. A GUI is a visualizer for those processes but not the controller of those processes. This means that designing the structure of the application around the GUI is often not the right thing to do. This perhaps sounds a bit generic, but think about it when you look at your MDI GUI and the business processes behind the GUI. Example: a business process is a unit which has a given state (semantically: the process is either doing nothing, or doing something in a series of actions). That state is something you want to preserve during the run of the process. As you can see, that state isn't something that should belong in the GUI, as the business process isn't a part of the GUI.
Don't confuse this, by the way, with the problems related to GUI programming. GUIs also have state; for example, in a wizard of 5 steps where the user is in step 4, you have to remember the previous states before you can apply the changes. Users want to undo things, want to cancel a complete screen, etc. Is cancelling a screen in the GUI changing a business process' state? No, not necessarily. The 'OK' action might have changed the business process state (or a couple of business processes' states: start them, kill them).
When you confuse the two, you run into problems: your objects which participate in the business processes' state have been altered by a GUI element and the user cancels that change, which means you have to revert the change in the business process state. When that happens, it's wise to take a step back and make sure the two are separated.
In regards to the Identity Map: I suppose that I relate it to caching certain portions of a thick-client application without having the HTTPCache object available. I completely agree that it will not work when multiple threads and sessions are involved (nor does Martin Fowler say that this pattern will handle multiple-session updates). I can see it being useful, once again, in an MDI environment. Depending on the starting point, you might already have part of the object graph available in another MDI child form. So if the data were in an identity map, there would be no need to hit the database again. Sure, if the data changes, you need to dump it and maybe reload it. I think it could also be useful for system configuration values that don't change that often but are used in internal decision making.
Nothing stops you from caching these elements yourself; I mean, you can store any object in a hashtable with a key you define.
The problem with this is that changes in the DB aren't tracked. So when the cached object does change, you don't see it. You only see it if you change it yourself. Is that safe? I don't think it is: only in one particular situation are you able to work with up-to-date data, namely when you updated the object yourself.
Not that this always has to be a problem; after all, as soon as you pull data out of the database, it's stale. How long you drag it along determines whether the data's staleness influences the correctness of your code.
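For example, a bare-bones helper (hypothetical, not part of LLBLGen Pro) which caches objects under a key you define and at least bounds the staleness with an expiry:

```csharp
// Hypothetical helper: a per-appDomain cache keyed by a string you define
// (e.g. "Customer:CHOPS"). Entries expire after a fixed time so staleness
// is at least bounded. It does NOT make instances unique across appDomains
// or threads.
using System;
using System.Collections;

public class SimpleEntityCache
{
    private class CacheEntry
    {
        public object Value;
        public DateTime StoredAt;
    }

    private readonly Hashtable _entries = Hashtable.Synchronized(new Hashtable());
    private readonly TimeSpan _maxAge;

    public SimpleEntityCache(TimeSpan maxAge)
    {
        _maxAge = maxAge;
    }

    public void Store(string key, object entity)
    {
        CacheEntry entry = new CacheEntry();
        entry.Value = entity;
        entry.StoredAt = DateTime.Now;
        _entries[key] = entry;
    }

    // Returns null when the key is unknown or the entry is too stale;
    // the caller then re-fetches from the database.
    public object Get(string key)
    {
        CacheEntry entry = (CacheEntry)_entries[key];
        if(entry == null || DateTime.Now - entry.StoredAt > _maxAge)
        {
            return null;
        }
        return entry.Value;
    }
}
```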
Joined: 17-Aug-2003
jeffreygg wrote:
Otis wrote:
Well, cancel changes is something I don't really see the point of. Just throwing away the object will do, or not?
Yea, that's what I thought as well, except that the concept "CancelChanges" can mean something different than "RevertToDBValues", which is what would happen if you just destroy the object. Example:
If I'm working on an entity (A) that has sub-entities, I
1. Create the (A) entity
2. Then go to create a sub-entity (B) (in a different form), set the values, hit "OK", which brings me back out to (A).
3. Go back into (B), change some values, then hit "CancelChanges"
If I destroy the object, I lose everything in step 2. I just want to cancel what I did in step 3.
Multi-level versioning of entity field values? So you can do 'SaveChanges("Name")' and after a while 'RollbackChanges("Name")', and you roll back to the changes set with the SaveChanges("Name")?
Versioning of fields is planned for the runtime library update in June-July, as concurrency control mechanisms sometimes want to use it (currently no versions are kept).
Keep in mind though that GUI-related stuff is not always something you want to clutter your class library with, but this is also important in business processes, as I described above in a posting: an object participating in a BL process can then be rolled back to a previous state when something fails. Now, rollback of data is already performed when a transaction fails, so the mechanism is already in place; it just has to be adjusted.
You have to be aware of the fact that the 'context' which tracks the changes has to be passed along to all methods you call
Isn't this just maintained inside the entities? I shouldn't have to pass that myself, right? I'm sure I'm misunderstanding this one...
No, that's just state of the entity itself. What I was referring to was:
- delete entity A
- create entity B
- update entity C and D
- fetch entity E, F and G
- update B with data from F and G
After this series of actions you want to persist these changes as a single unit. Now, you do that by starting a transaction, issuing the actions inside the transaction and committing the transaction. With a unit of work, you simply call the actions in your code along the way and in the end you simply say 'PersistChanges' or something similar. During your code, the actions weren't performed; they were tracked, logged if you will. When PersistChanges is called, the actions are actually executed.
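With a command-queueing adapter like the one sketched earlier in this thread, that series would read roughly like this (the entity variables and the wrapping method are hypothetical):

```csharp
// Usage sketch of the deferred approach described above, using the
// TrackingAdapter sketched earlier in the thread.
using SD.LLBLGen.Pro.ORMSupportClasses;

public class UnitOfWorkExample
{
    public void RunActionSeries(TrackingAdapter adapter,
                                IEntity2 entityA, IEntity2 entityB,
                                IEntity2 entityC, IEntity2 entityD)
    {
        adapter.DeleteEntity(entityA);   // queued, not executed yet
        adapter.SaveEntity(entityB);     // queued insert
        adapter.SaveEntity(entityC);     // queued update
        adapter.SaveEntity(entityD);     // queued update

        // ... fetches and further changes to entityB would go here ...

        adapter.ApplyChanges();          // one transaction, all queued actions replayed
    }
}
```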
Obvious problems are: you delete an entity first, then use it again in a later action. Or you insert an entity which will violate a unique constraint, but the entity with the same value is deleted later on. Should the layer sort this out, or should it simply give up and show the developer s/he's doing something not that bright? (I think there are scenarios where you can't sort it out as a layer.)
and, what's more important, entity storage shifts from database-centric to memory-centric, which can bring problems of its own.
Yea, and here is where I can see your reasonable argument against doing this kind of stuff. I recognize the need to maintain a single state of the entity, especially for cross-process stuff. However, if the only true state is in the DB, then editing, changing, rolling back changes, etc. shouldn't be an issue, since the only time we're worried about state is after we commit to the DB.
True if you look solely at that aspect, but it ignores the stale factor all data outside the DB has. If you base a decision on data you've read 4 hours before that decision and 10 users have altered that data in the meantime, your decision can have severe consequences. It's hard to avoid staleness entirely, of course, but keeping it as low as possible is key to success.
After thinking about this for a little bit, I think I see where your problem is. Basically, the "farther" from the DB the user is making changes, the more likely we have concurrency and "state" issues. The more "memory-centric" the model is, the more the user and developer (and perhaps more importantly other users and developers) have to worry about whether the entity's state is consistent.
Exactly
However, I just wonder how realistic this is. Perhaps I understand your desire to keep LLBL's philosophy consistent and keep it as close to the DB as possible. I can respect this. However, I wonder if the reality is that real-world use of the system requires us to have in-memory representations of, and processes on, the entities. I can't help but wonder what sort of impact not having a "Cancel" button on my detail forms would have on the end user, and it is just this type of thing that requires me to hold state in memory while the user decides what he really wants to do.
As I described above in another posting: keep the GUI state separated from the business process state(s). A business process isn't aware of a cancel button. It is just performing a series of tasks.
The alternative model that you would have is a chatty application that persists each change to each property the user makes as he makes it (TextBox.OnValidate = Entity.Save), with either no opportunity to Cancel, or with a bunch of buckets that are used to write over the DB records if the user does hit Cancel.
By separating GUI state from BL state, you will see that the BL is not doing anything until the user clicks OK. In fact, if the user clicks Cancel, the BL will not even be called. The GUI collects the required data for a BL process to perform its tasks. The BL reports back to the GUI how things are going. After the process has been completed, the overall application state(!) is changed.
If you want to be able to interrupt a BL process (and before you say "I want to", consider the consequences, as you need COM+ and DTC to control it!), you have to be able to revert actions performed by the BL process before you interrupt it. This isn't a simple thing, btw.
The problem with that is obvious: the user never intended to communicate to the other users of the system that the changes he was making were to be considered permanent and safe to use.
True. However, when the user hits 'OK' the user did that for a reason. You aren't asking "Are you sure?" after an OK click after all, so the user knows what he's doing.
The user communicates to the system that the values he has entered are "persistable" when he hits the "Save" button, and not before. Thus, this requires an in-memory representation of the entity that he can play with at will, safe from the prying eyes of the other users until he lets them know it's OK to use it. Based on this, I don't see the harm in capturing his changes and allowing him to undo or persist them at will, as the principle is that this is the only time he has to do it. Since the "wall" is up protecting him from other users until he hits "Save", it seems as if the sky's the limit in terms of what the user may do with the entity he's working on - it's his entity.
To connect this to my previous posting about GUI state: the user works with data in the GUI state, and when hitting Save, the BL process starts its sequence with its own state. So what you need is a set of helper classes (or helper functionality) for your GUI state control.
In terms of duplicating this sort of process and mapping it onto properties and methods, it seems the following would be appropriate:
.BeginTracking 'Start tracking modifications to properties
.EndTracking 'Not sure if necessary as either a .Save or a .UndoChanges would implicitly call this. However, it is functionally distinct from either of these processes
.UndoChanges 'Revert to the values in the properties as they were before .BeginTracking was called.
.Revert 'Reload the entity's property values from the persistent storage. Perhaps different than destroying and recreating the entity?
Except for UndoChanges, every revert or other action which goes back to the original DB values is IMHO not that useful, because it is a dupe of other functionality: UndoChanges after an initial save of the state will revert to the initial state of the object.
I think that with multi-level versioning in the entity fields object, based on savepoints with names (which is not that hard to implement: a hashtable with copies of entity fields; a rollback is simply setting the current entity fields to the entity fields object stored under the specified name), you can overcome all these problems.
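A rough sketch of the savepoint idea (a hypothetical helper, not the planned implementation; how the values are copied in and out of the entity's fields is left out, as that part is specific to the generated code):

```csharp
// Hypothetical sketch of named savepoints over an entity's field values.
// The caller supplies a Hashtable of field name -> value when saving and
// gets a copy back when rolling back.
using System.Collections;

public class FieldSavepoints
{
    private readonly Hashtable _savepoints = new Hashtable();   // savepoint name -> field values

    public void Save(string savepointName, Hashtable currentFieldValues)
    {
        // store a copy, so later edits to the entity don't affect the snapshot
        _savepoints[savepointName] = currentFieldValues.Clone();
    }

    // returns null when the savepoint name is unknown
    public Hashtable Rollback(string savepointName)
    {
        Hashtable snapshot = (Hashtable)_savepoints[savepointName];
        return snapshot == null ? null : (Hashtable)snapshot.Clone();
    }
}
```

The entity (or a derived class of it) would then copy its field values into such a hashtable on 'SaveChanges("Name")' and write them back on 'RollbackChanges("Name")'.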
Joined: 17-Aug-2003
jeffreygg wrote:
Otis wrote:
Edit: Jeff, I'd like to add that I appreciate your view on the matter, don't get me wrong.
As always, I simply find it a privilege to be conversing with the author of a commercial software product. I can't overestimate the value your support brings to your already incredible tool.
<Sigh> Sadly, I'm using Self-Servicing. Not that I feel I got the short end of the stick.
I think a generic solution is better than some derived template hack. So selfservicing will not be left out.
Joined: 26-Oct-2003
OOOOoooookayyyyyy. The light is starting to turn on now. The objects LLBLGen generates are for use in the business layer and below; they were never intended to solve my GUI needs. Right? This is why we keep going back and forth about the state of the entity. This also keeps you from having to get involved in messy GUI problems that could compromise the stability/consistency of your architecture, which allows you to keep the data as sane as possible. Okay. Got it. Now, can you please start working on PLGen (Presentation Layer Gen) as soon as possible so that I can go back to not having to do anything myself? Thanks!
Jeff...
Joined: 04-Feb-2004
Otis wrote:
When you confuse the two, you run into problems: your objects which participate in the business processes' state have been altered by a GUI element and the user cancels that change, which means you have to revert the change in the business process state. When that happens, it's wise to take a step back and make sure the two are separated.
I totally agree with you. I personally prefer the MVC pattern, which was going to be my next topic in the Architecture section.
When I started this thread, I was initially thinking that I could host unitOfWork objects in the controller. UnitOfWork objects in the controller could call the required functionality of the services layer to "Do the work."
I was also thinking that after the unit of work was done, that a singleton subject object could update all of its observers that might be interested in the data that just changed.
I haven't given much thought yet to multi-user scope at this point, but ultimately, I think it could be done.