Domain Model architecture with ORMs
Joined: 11-Nov-2011
I am attempting to find a suitable architecture for our system and have been investigating the pros and cons of using an ORM. While I appreciate that they are great for providing an object model that closely mirrors the database on a one-to-one basis, and for simplifying CRUD operations, I get the impression that it is not so straightforward to use them when the domain model (how you would like to interact with the data within the application) does not mirror the data model: the object-relational impedance mismatch (http://en.wikipedia.org/wiki/Object-relational_impedance_mismatch).
We are using a layered architecture and already have a highly normalised relational database model, and we have used LLBLGen Pro to generate the entity classes. However, it would be preferable to access, present and manipulate the data through a set of classes that are populated with data from a number of related tables. These would then be displayed and manipulated as one unit within the application, with updates passed back to the ORM, which takes care of synchronising the changes across the multiple source tables as one atomic operation. I can achieve this to some degree by using partial classes to add the required properties that use the navigators, but this tends to produce excessive SQL calls and requires quite a heavyweight class to be passed through the system. This can be shielded to some degree using interfaces, but it still seems far from ideal, increases coupling to the ORM framework, and reduces the applicability of object-oriented principles within the model.
I have looked into creating a repository layer that pulls in data from the ORM entity classes and constructs a separate domain class instance (much like a POCO) and returns this to the service layer. It is then down to the repository layer to re-constitute the appropriate ORM entities, and call each save method, wrapped up in a transaction/unit of work. This appears to require a lot more awareness and consideration of change tracking, mapping and manual management.
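Roughly, the shape I have in mind is something like this (a language-neutral sketch; in practice this would be C# against the generated entity classes, and all names here are illustrative only):

```python
from dataclasses import dataclass

# Stand-ins for ORM-generated entity classes (names are made up).
@dataclass
class CustomerEntity:
    customer_id: int
    name: str

@dataclass
class AddressEntity:
    customer_id: int
    city: str

# Flattened domain class populated from several related entities.
@dataclass
class CustomerDomainModel:
    customer_id: int
    name: str
    city: str

class CustomerRepository:
    """Maps between ORM entities and the domain model in both directions."""

    def __init__(self, customers, addresses):
        # In a real implementation these would be fetched via the ORM.
        self._customers = {c.customer_id: c for c in customers}
        self._addresses = {a.customer_id: a for a in addresses}

    def get(self, customer_id):
        # Construct one flat domain object from two related entities.
        c = self._customers[customer_id]
        a = self._addresses[customer_id]
        return CustomerDomainModel(c.customer_id, c.name, a.city)

    def save(self, model):
        # Re-constitute the entities and push the changes back; in a real
        # implementation this happens inside one transaction / unit of work.
        c = self._customers[model.customer_id]
        a = self._addresses[model.customer_id]
        c.name = model.name
        a.city = model.city
```

Retrieving gives the service layer one flat object; saving pushes the changes back across both source tables.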
ORMs appear to provide a good, consistent interface to the database, but I think the claim that they bridge the gap between the object model and the data model is perhaps a little misleading. Can anyone offer any advice as to how we can easily use the ORM with an object-oriented domain model structure?
Thanks for reading and looking forward to reading any advice you may have.
Joined: 21-Aug-2005
The claim doesn't contradict the fact that impedance mismatches can exist in some business models.
Generally I tend to use a repository layer as you have described it.
Btw, you can use inheritance to construct entities from more than one entity.
Joined: 11-Nov-2011
So would you suggest using some form of mapping tool (such as AutoMapper) within a repository to 'flatten' the ORM entity tree structure into a domain model?
Does this model fit well when there is a fairly deep entity hierarchy that needs flattening, particularly when retrieving a collection of entities? I know there are pre-fetching facilities within LLBLGen, but from my prototyping and profiling, it seemed rather inefficient in terms of the number of executed SQL calls.
It would still be nice to have change tracking on the domain fields to limit the ORM entity re-construction and saving effort. I have previously used Entity Framework to generate self-tracking POCOs; I'm guessing the same sort of idea will permit this facility here too.
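By self-tracking I mean something along these lines (a minimal, language-neutral sketch of field-level dirty tracking; the field names are made up):

```python
class SelfTracking:
    """Minimal sketch of field-level change tracking: record which fields
    changed, so only the affected entities need re-constituting and saving."""

    def __init__(self, **fields):
        # Bypass our own __setattr__ while initialising the backing stores.
        object.__setattr__(self, "_values", dict(fields))
        object.__setattr__(self, "_dirty", set())

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails.
        try:
            return self._values[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        # Mark the field dirty only when the value actually changes.
        if self._values.get(name) != value:
            self._values[name] = value
            self._dirty.add(name)

    @property
    def changed_fields(self):
        return set(self._dirty)
```

The repository could then consult `changed_fields` and only rebuild and save the entities whose source fields were actually touched.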
Thanks for reviewing.
Joined: 17-Aug-2003
petefitz wrote:
So would you suggest using some form of mapping tool (such as AutoMapper) within a repository to 'flatten' the ORM entity tree structure into a domain model?
If your needs for altering data don't match the entity model, you need another model (the 'M' in MVC, MVVM, etc.) which is a projection of the entity model. AutoMapper can deal with that.
Though you have to wonder whether you really need to have a totally different model in your application: using a different model in theory means you work with different entities.
Does this model fit well when there is a fairly deep entity hierarchy that needs flattening, particularly when retrieving a collection of entities? I know there are pre-fetching facilities within LLBLGen, but from my prototyping and profiling, it seemed rather inefficient in terms of the number of executed SQL calls.
How is it inefficient? It uses one query per graph node, which is in general more efficient than using joins to create a flattened set: joins in these cases create a lot of duplicates of the data, blowing up the resultset and the time needed to process it.
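As a toy illustration with made-up numbers (the row and column counts here are assumptions, not measurements): fetching Customer -> Orders -> OrderLines for one customer with 10 orders of 20 lines each.

```python
# Assumed column counts per table (illustrative only).
cust_cols, order_cols, line_cols = 8, 6, 4
n_cust, n_orders, n_lines = 1, 10, 200  # 1 customer, 10 orders, 20 lines each

# One flattened JOIN: one row per order line, with the customer and order
# columns repeated on every single row.
join_cells = n_lines * (cust_cols + order_cols + line_cols)

# One query per graph node (prefetch-path style): each value crosses the
# wire exactly once, at the cost of 3 queries instead of 1.
prefetch_cells = (n_cust * cust_cols
                  + n_orders * order_cols
                  + n_lines * line_cols)

print(join_cells)      # 3600 values transferred
print(prefetch_cells)  # 868 values transferred
```

The deeper and wider the graph, the bigger that gap gets, which is why more queries can still mean less work overall.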
It would still be nice to have change tracking on the domain fields to limit the ORM entity re-construction and saving effort. I have previously used Entity Framework to generate self-tracking POCOs; I'm guessing the same sort of idea will permit this facility here too. Thanks for reviewing.
Our entities do their own change tracking, so that's covered. What you could look into is the 'field mapped onto related field' feature. It requires the related entity to be present (eager loaded), but you can edit these fields in the scope of the related entity. Don't overdo this, though: they're meant for displaying related field values in the scope of the core entity (e.g. Order.Customer.CompanyName in the scope of the Order), in databinding scenarios for example. One could also use these for lookups.
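In spirit, such a field behaves like a property that reads and writes through to the eager-loaded related entity (a language-neutral sketch of the idea, not the actual generated code; the class and field names are illustrative only):

```python
class Customer:
    def __init__(self, company_name):
        self.company_name = company_name

class Order:
    """Sketch of a field mapped onto a related field: Order exposes the
    related Customer's CompanyName as if it were its own field. The related
    entity must already be loaded for the property to be usable."""

    def __init__(self, customer=None):
        self.customer = customer  # eager-loaded related entity, or None

    @property
    def customer_company_name(self):
        if self.customer is None:
            raise ValueError("related Customer entity not loaded")
        return self.customer.company_name

    @customer_company_name.setter
    def customer_company_name(self, value):
        if self.customer is None:
            raise ValueError("related Customer entity not loaded")
        # Writes land on the related entity, which tracks the change.
        self.customer.company_name = value
```

Databinding against `customer_company_name` then shows and edits the related entity's value in the scope of the Order row.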
Be aware that with every layer of abstraction comes overhead. If you think you need a different model for your application, you need to transform the entity model in memory to the new model and back. As it's unclear what the differences are between the two models in your application, it's not possible to suggest what the best choice might be for you, though I'd suggest looking into using the entity objects in the application directly, using adapter. The adapter entity classes are not persistence aware, so you need the repository for persistence, and the entity classes themselves are usable in e.g. databinding scenarios, and do validation and change tracking: all that code is there for you to use.

Covering it up with yet another layer of abstractions 'just because someone said so' is IMHO not the way to go. Abstractions are sometimes necessary, but they do have to serve a purpose (e.g. making the code easier to understand and work with). Using another model and converting back and forth between the two is IMHO not one of these.