Derived Models, Reuse Models/Classes

mprothme
User
Posts: 80
Joined: 05-Oct-2017
# Posted on: 30-Nov-2021 18:34:46   

Say I have the following entities:

  1. Employee
  2. Address
  3. Company

In this case a company can point to an Address and can be referenced by multiple Employee records.

Is there any way to generate derived models for Employee and Address individually, and have a derived model Company reference those derived models in its generated code, that way I don't end up having an Employee model class and what amounts to a Company.Employee Class?
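To make the question concrete, here is a hypothetical sketch of the shape the generated code takes today versus what I'm hoping for (class and property names are illustrative, not actual generator output):

```csharp
using System.Collections.Generic;

public class Address
{
    public string Street { get; set; }
}

public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Company
{
    public int Id { get; set; }
    public Address Address { get; set; }

    // Today: a second, nested employee model that duplicates the
    // top-level Employee class field-for-field.
    public class EmployeeLocal
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }
    public List<EmployeeLocal> Employees { get; set; }
}
```

What I'd like is for `Company.Employees` to be typed as `List<Employee>`, reusing the top-level class, so there's one model class per entity.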

Thanks!

daelmo
Support Team
Posts: 8245
Joined: 28-Nov-2005
# Posted on: 01-Dec-2021 07:37:17   

Hi there,

AFAIK, there is no built-in way to reuse derived model types inside other derived models. I think it's safer that way because, after all, they are DTOs that can change, and you should be able to shape each DTO for its purpose. If you really need shared types, you'll have to do it manually in your generated code.

David Elizondo | LLBLGen Support Team
Walaa
Support Team
Posts: 14986
Joined: 21-Aug-2005
# Posted on: 01-Dec-2021 09:12:54   

It's designed this way for flexibility. If DTOs referenced each other, you would be stuck with the same DTO (all fields) in every referencing DTO, which is not what you want most of the time. In the current design, you get to choose which fields you want from related DTOs for each referencing DTO, which gives you the flexibility to create DTOs that cater to the business needs without any extra, unneeded fields.

mprothme
User
Posts: 80
Joined: 05-Oct-2017
# Posted on: 01-Dec-2021 21:04:21   

Walaa wrote:

It's designed this way for flexibility.

Yeah, I definitely understand the reasoning behind how it's currently implemented.

A bit of background. There's been a push recently in our organization to focus more on clean architecture, part of that involves making changes so that the core layer of our application can remain unaware of where data is coming from or the particular implementation of how it's loaded/persisted. Instead, that logic lives in an infrastructure layer and implements data access interfaces (defined in core) that the core layer retrieves through dependency injection.

We're using the Adapter template set, and the initial idea was that the Adapter & Entity classes would live in core, and the DBSpecific project would live in infrastructure. The entity classes themselves are used to define data access interfaces, and the DBSpecific project is responsible for providing IQueryables and other logic for pulling data. There does, however, seem to be a lot of LLBLGen Pro-specific (and seemingly persistence-specific) logic in those classes, and in general they're fairly heavyweight, which generated pushback.

On the other hand, the derived model classes are just DTO objects without any frills or extra functionality, and we were hoping to use those derived model classes instead of entity classes. Unfortunately for our use case, unlike an entity, you can have many derived models that originate from the same entity, either at the top level or as a child of another model. Even if the models have the exact same fields, they end up as separate classes, which makes the logic for persisting those models back to actual entities in the infrastructure layer fairly complicated.

Our thought was that if we could configure LLBLGen Pro to use an existing derived model on nested objects (ex. Employee is defined once, and Company.Employee uses the top-level Employee Class), we could ensure that at a top-level only one model existed per entity. The end result is that there would be a single derived model class per entity, and if we needed to add, extend, or put other domain logic on those models we wouldn't have to do it in multiple places and maintain that everywhere. Additionally persisting that data becomes significantly easier because we'd be able to leverage a single method to map a model to an entity regardless of where it was.

If this isn't available, we thought we'd just generate models for each entity at the top level, and then add nested models to each top-level model using partial classes, but that's less ideal because querying becomes much more complex (instead of a single projection), so we were hoping it would be supported in some way.
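The partial-class workaround mentioned above would look roughly like this, assuming the generated derived models are emitted as partial classes (names illustrative):

```csharp
using System.Collections.Generic;

// Generated file (untouched): one top-level model per entity.
public partial class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public partial class Company
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Hand-written partial: we nest the top-level model ourselves,
// so Company reuses the single Employee class.
public partial class Company
{
    public List<Employee> Employees { get; set; } = new List<Employee>();
}
```

The downside, as noted, is that `Employees` can no longer be filled by the single generated projection; it needs a second query and manual stitching.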

Otis
LLBLGen Pro Team
Posts: 39760
Joined: 17-Aug-2003
# Posted on: 02-Dec-2021 09:47:00   

mprothme wrote:

Walaa wrote:

It's designed this way for flexibility.

Yeah, I definitely understand the reasoning behind how it's currently implemented.

A bit of background. There's been a push recently in our organization to focus more on clean architecture, part of that involves making changes so that the core layer of our application can remain unaware of where data is coming from or the particular implementation of how it's loaded/persisted. Instead, that logic lives in an infrastructure layer and implements data access interfaces (defined in core) that the core layer retrieves through dependency injection.

We're using the Adapter template set, and the initial idea was that the Adapter & Entity classes would live in core, and the DBSpecific project would live in infrastructure. The entity classes themselves are used to define data access interfaces, and the DBSpecific project is responsible for providing IQueryables and other logic for pulling data. There does, however, seem to be a lot of LLBLGen Pro-specific (and seemingly persistence-specific) logic in those classes, and in general they're fairly heavyweight, which generated pushback.

It's a story we've heard a thousand times before (not your fault): POCOs are better! etc. To be able to use POCOs in a persistence situation, they're either proxied or you need an abstraction layer around them, and both have disadvantages too. Our entity classes have code, sure, but there are no surprises: the code that runs at runtime is visible to all, not proxied away or hidden behind a layer of indirection. People who request these kinds of things usually don't maintain the code that results from their requests, otherwise they'd stop.

On the other hand, the derived model classes are just DTO objects without any frills or extra functionality, and we were hoping to use those derived model classes instead of entity classes. Unfortunately for our use case, unlike an entity, you can have many derived models that originate from the same entity, either at the top level or as a child of another model. Even if the models have the exact same fields, they end up as separate classes, which makes the logic for persisting those models back to actual entities in the infrastructure layer fairly complicated.

Yes, if you use them in these scenarios it's not a good fit. They're meant to be used as a derived model with a different use context, through projections. This is a key concept: the entity instances, which represent instances of an abstract entity definition, are projected to instances of definitions of another kind. Those aren't 'entities'; they don't represent abstract entity definitions. If we shared the types among the derived elements, you'd get breaking code every time you denormalize a field, as the type then has to be split into two.

It's a commonly made error to see two instances of two classes that both contain the same set of bytes resembling a row in some table as the same thing, but that's not correct. Every projection means you cross a boundary into another realm, where the source of the projection has no meaning and the target of the projection is the one that gives meaning to the data. The data might have the same shape/form, but that doesn't have to be the case. This concept is why a derived model is called a derived model and not just an entity model: it's a model that derives from a model definition and obtains its data through projections, so it can give meaning to that data, after projecting, in the context of the derived model. Derived models have different rules; they can have denormalized fields, for instance.
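The boundary-crossing idea above can be sketched in plain LINQ, independent of any ORM; the entity and derived-model type names here are illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// "Entity" side: instances of the abstract entity definition.
public class CompanyEntity { public string Name; }
public class EmployeeEntity
{
    public int Id;
    public string FirstName;
    public string LastName;
    public CompanyEntity Company;
}

// Derived-model side: its own type with its own rules,
// including a denormalized field pulled across the m:1 relationship.
public class EmployeeListItem
{
    public int Id;
    public string FullName;
    public string CompanyName;
}

public class Demo
{
    public static void Main()
    {
        var employees = new List<EmployeeEntity>
        {
            new EmployeeEntity { Id = 1, FirstName = "Ada", LastName = "Lovelace",
                                 Company = new CompanyEntity { Name = "Analytical" } }
        };

        // The projection is where the data crosses the boundary: the target
        // type, not the source entity, now gives the data its meaning.
        var items = employees.Select(e => new EmployeeListItem
        {
            Id = e.Id,
            FullName = e.FirstName + " " + e.LastName, // reshaped
            CompanyName = e.Company.Name               // denormalized
        }).ToList();

        Console.WriteLine(items[0].FullName + " @ " + items[0].CompanyName);
    }
}
```

`EmployeeListItem` has no back-reference to `EmployeeEntity`; once projected, the data lives entirely in the derived model's context.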

Our thought was that if we could configure LLBLGen Pro to use an existing derived model on nested objects (ex. Employee is defined once, and Company.Employee uses the top-level Employee Class), we could ensure that at a top-level only one model existed per entity. The end result is that there would be a single derived model class per entity, and if we needed to add, extend, or put other domain logic on those models we wouldn't have to do it in multiple places and maintain that everywhere. Additionally persisting that data becomes significantly easier because we'd be able to leverage a single method to map a model to an entity regardless of where it was.

If this isn't available we thought we'd just generate models for each entity at a top-level, and then using partial classes add nested models to each top-level model, but that's less ideal because querying becomes much more complex (instead of a single projection) so we were hoping it would be in some way.

But... all this for 'cleaner code', without a realistic definition of what that even means? simple_smile (again, not your fault). The work you're trying to do doesn't bring anything useful to the table. What it effectively does is try to turn our framework into a POCO-based framework, with a subpar persistence route (as persisting POCOs is the real problem here). Because conceptually, the only way to make this work, to some extent, is if the entity model is generated as POCO classes (and persistence is outsourced to an abstracted-away layer).

Besides that, one of the problems is how deep you will go with your derived elements. Take Customer 1:n Order 1:n OrderDetails m:1 Product. In the Customer derived element, to be able to fetch that graph I have to add all of these. However, if in some case I also want to fetch Order 1:n OrderDetails, I have to define a second derived element with those elements.

As you can see, the derived elements aren't 'just DTOs', but specific graphs (a derived element with embedded derived element(s), etc.) which should be tailored to the scenarios they're used in.
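In code terms, those two use cases from the Customer example end up as two separate graphs even though they overlap; the type names below are hypothetical, just to show the shape:

```csharp
using System.Collections.Generic;

// Derived element for the customer screen:
// Customer 1:n Order 1:n OrderDetails m:1 Product.
public class CustomerView
{
    public class OrderView
    {
        public class OrderDetailView
        {
            public int ProductId;
            public string ProductName; // denormalized from Product
        }
        public List<OrderDetailView> Details = new List<OrderDetailView>();
    }
    public int CustomerId;
    public List<OrderView> Orders = new List<OrderView>();
}

// A second derived element for a different use case: Order 1:n OrderDetails
// only. Its detail type is a distinct class from the one nested above,
// even though the fields overlap.
public class OrderListView
{
    public class OrderDetailView
    {
        public int ProductId;
    }
    public int OrderId;
    public List<OrderDetailView> Details = new List<OrderDetailView>();
}
```

Each graph is fetched with a single projection tailored to its scenario; sharing the nested types between them would couple the two use cases together.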

If I read your post correctly, I can understand why decoupling might be handy in some cases, so clients don't depend on what the services use for persistence, as long as, for example, the JSON contract holds. But isn't it better to have well-defined services with implementation-agnostic interfaces which work with derived element types, so the clients/consumers of these services don't know how or where the data comes from, but the services themselves know how to route the DTO instances from/to the backend for the various use cases? (In other words: the typical thin client <-> service route, so client <-json-> service, where service is really service <-derived elements-> backend; the backend knows about entities, but derived elements are defined for the use case at hand.)

The thing is, say you've had enough of our little framework and you go 'all POCO' with EF Core 6. The service then has to know about EF Core 6 too, otherwise it can't save any entity (or it has to know what's used, so it knows how the entity objects should be treated). Somewhere, something will leak the abstracted-away persistence layer, as it's impossible for things to be 'ORM agnostic'. With the use-case-specific derived elements, you can pass them through the service to a layer where entities are known, where the code knows how to deal with them, and things aren't abstracted away.

Abstraction layers are nice if they let you focus on what is important. I get why people feel the urge to wrap ORM-using code in an abstraction layer to make the code 'more clean' or whatever they'll call it, but that only makes things harder to use while also requiring a lot more code. Code you all have to maintain till the app is sunset.

Frans Bouma | Lead developer LLBLGen Pro
mprothme
User
Posts: 80
Joined: 05-Oct-2017
# Posted on: 02-Dec-2021 18:41:35   

Thanks for the detailed reply!

Otis wrote:

The thing is, say you've had enough of our little framework and you go 'all POCO' with EF Core 6. The service then has to know about EF Core 6 too, otherwise it can't save any entity (or it has to know what's used, so it knows how the entity objects should be treated). Somewhere, something will leak the abstracted-away persistence layer, as it's impossible for things to be 'ORM agnostic'. With the use-case-specific derived elements, you can pass them through the service to a layer where entities are known, where the code knows how to deal with them, and things aren't abstracted away.

So we get that changing the infrastructure layer would involve re-writing how we handled and persisted data. The intent we have is that, regardless of technology:

  1. the core layer works with and exposes interfaces leveraging POCO objects (defined in the core layer)
  2. the infrastructure layer is responsible for returning and persisting POCO objects to/from the database, and for implementing the data access interfaces the core layer exposes

If the two items above hold, then we at least isolate infrastructure changes to the infrastructure layer and don't have to change core if we swap our ORM technology.

An infrastructure layer leveraging LLBLGen Pro could:

  1. Enable data querying by generating POCO IQueryables using built-in projection extensions on the base entity
  2. Enable persistence by mapping POCO objects back to entity instances, again using the persistence classes.
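A sketch of that split might look like this. The repository interface and DTO names are ours; `DataAccessAdapter` and `LinqMetaData` are the LLBLGen Pro adapter-template classes (the generated names depend on the project), while the projection itself is plain LINQ:

```csharp
using System.Collections.Generic;
using System.Linq;

// --- Core layer: POCO + interface, no ORM references. ---
public class EmployeeDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IEmployeeRepository
{
    IReadOnlyList<EmployeeDto> GetAll();
}

// --- Infrastructure layer: LLBLGen Pro adapter implementation. ---
// Assumes the generated DataAccessAdapter, LinqMetaData, and an Employee
// entity with Id/Name fields; the names are placeholders for whatever
// the generated project actually contains.
public class LLBLGenEmployeeRepository : IEmployeeRepository
{
    public IReadOnlyList<EmployeeDto> GetAll()
    {
        using (var adapter = new DataAccessAdapter())
        {
            var metaData = new LinqMetaData(adapter);
            // Single projection from entities to core-layer POCOs.
            return metaData.Employee
                .Select(e => new EmployeeDto { Id = e.Id, Name = e.Name })
                .ToList();
        }
    }
}
```

Core resolves `IEmployeeRepository` through dependency injection, so swapping the ORM means rewriting only the infrastructure-side class.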

If (and I hope this doesn't happen) we were to switch away from LLBLGen Pro, that would mean that we'd

  1. Start writing the POCO objects by hand, which would obviously suck but would be doable.
  2. Write code that, using whatever the next thing was (EF Core, Dapper, whatever), mapped queried data to POCO objects, and mapped POCO data back to whatever structure the framework used for persistence.

If we used entity classes instead of POCOs, we have the following worry: I cannot imagine writing an entity class by hand. This leads to two problems:

  1. We'd need to convert and change all of our exposed interfaces in the core layer, or keep existing entities and use POCO objects only for new structures. Neither option is fun.
  2. There are a lot of nice QOL features in the entity classes themselves that I think people and code would become dependent on, or would be using without even being aware of (for example, assigning a related entity object automatically sets FK id fields), so if we did rewrite everything, we'd have to find and pull all of these out.

I think in the end we'd like to separate the infrastructure layer as much as possible from the core layer, and make the core layer as framework agnostic as possible (understanding that's hard to do completely). Having the ability to maintain a derived model that mirrors the underlying database structure (using the designer to control the depth of data fetching), but where only one instance of a model exists per entity, would help us with that. I think we can probably do all of this by hand in the designer; it just means there ends up being a lot more boilerplate or extended code to help with persistence. Further, if we want to extend our models using a partial class, we really have to find all derived models based on that entity and add partial classes for each of them.

I definitely understand why what I asked about isn't a feature though, thanks for answering all of my questions as always!!!

Otis
LLBLGen Pro Team
Posts: 39760
Joined: 17-Aug-2003
# Posted on: 03-Dec-2021 11:04:07   

There are a lot of nice QOL features in the entity classes themselves that I think people and code would become dependent on, or would be using without even being aware of (for example, assigning a related entity object automatically sets FK id fields), so if we did rewrite everything, we'd have to find and pull all of these out.

This is one of these things indeed that makes an ORM leak its characteristics through into the layers wrapping it, another one is

myOrder.Customer = myCustomer;
// this is now true:
myCustomer.Orders.Contains(myOrder);

This isn't the case for most POCO frameworks. There are many others.

Our designer comes with a fast, detailed code generation system; you can generate the code you want with some templates, perhaps that's an idea? Writing classes by hand is not really something you should be worried about, as you can always generate them.

At the moment we can't provide the DTO classes you're looking for with our framework. It's up to you to decide where you draw the line where 'agnostic code' starts/ends. No matter where you draw that line, though, it'll leak: things will depend on other things you might not want, but you accept it because in the end the main goal is to create an app that works.

We've rarely seen people change the ORM in an existing application, btw. It's always good to strive to use as few dependencies as possible; however, it shouldn't become a goal in itself, as not taking a dependency on XYZ involves compromises that cost work and churn too (so it's not easier in the end; it might even be more complicated).

Frans Bouma | Lead developer LLBLGen Pro