Modules and Schema Extensions Question

Evan
User
Posts: 67
Joined: 02-May-2012
# Posted on: 09-Dec-2014 16:33:34   

I know similar questions have been asked, and I've been searching through the forums reading through them. We're in a position where we'd like to make our application modular as well as allow for extensions of the core product. As far as I can tell, we need to decide whether to keep the whole LLBLGenPro project together or split it up into different groups/assemblies. If we split it up, we lose entity-to-entity relationships and/or we have to duplicate entities across multiple assemblies.

I hate to give up the associations between modules or between the core and the extensions. I also assume that means you can't query the same way across them. For example, I couldn't write .Where(customEntity => customEntity.Product.Name == "Apples") if Product exists in the core and CustomEntity in the extension, correct? (See the sketch below.) There are some other things I assume would not be possible, including:

* An extension/customer wouldn't be able to add a column to a core entity
* An extension/customer wouldn't be able to load a core entity and prefetch/access custom related entities
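
For concreteness, here's roughly what that kind of query looks like for us today with everything in one model, using the Adapter LinqMetaData pattern (the entity names are just placeholders for our real ones):

    using System.Linq;
    using SD.LLBLGen.Pro.LinqSupportClasses;

    using(var adapter = new DataAccessAdapter())
    {
        var metaData = new LinqMetaData(adapter);
        var rows = metaData.CustomEntity
                           .Where(c => c.Product.Name == "Apples")
                           .ToList();
        // If CustomEntity lived in an extension assembly and Product only in the
        // core assembly, the c.Product navigation wouldn't exist and this wouldn't compile.
    }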

Keeping it all together seems like it may be better based on the limitations above. If we keep it all together, though, then we're not truly modular and each extension has to be merged whenever schema changes are made to the core. Has anyone had success with this? The idea is that each customer who wants to make schema changes would basically be given a copy of the LLBLGenPro project for their version. They would build it in their own solution and deploy it over the .dll that ships with their version. Whenever they need to deploy an upgrade, they would manually merge their .llblgenproj with the one shipped with the release, then regenerate/rebuild/redeploy their customized version.

We considered maintaining each extension ourselves, but it seems like it would be a painful development model for the customer to have to send us their schema changes and for us to send them back a .dll containing their modifications.

Is there any other option that we're missing? Any other recommendations? It seems like there may be some options around defining Groups and extending Entities (TargetPerEntity and TargetPerEntityHierarchy were mentioned). Are there any good examples or articles to read up on those? Would they help?

Thanks much, --Evan

daelmo
Support Team
Posts: 8245
Joined: 28-Nov-2005
# Posted on: 10-Dec-2014 07:03:36   

Hi Evan,

I'd create multiple projects if the number of classes is too big. Often in these large schemas it's not hard to recognize the different subsystems which aren't related. In other words: it's unlikely you have a fully connected entity model of 1200 entities where you can traverse from any entity to any other entity.

Another tip: with large schemas, compile the generated code on the command line: it's much faster. Reference the generated code as an assembly in your own project; this way VS.NET isn't bogged down by the code.

If you use the Adapter template set it's easy, for instance, to create one DBGeneric project with all the related entities, and then one DBSpecific project per subsystem. The other way around is also possible: groups of entities (multiple DBGeneric projects) and a single DBSpecific project.
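
For example, consuming that split from application code could look roughly like this (the assembly/namespace names are just illustrative; it's the split that matters):

    // Entities come from the shared DBGeneric assembly, persistence from the
    // subsystem's own DBSpecific assembly. Names below are illustrative only.
    using Shop.Model.EntityClasses;          // DBGeneric: shared entity classes
    using Shop.Orders.DatabaseSpecific;      // DBSpecific project for the 'Orders' subsystem

    public class OrderReader
    {
        public OrderEntity GetOrder(int orderId)
        {
            using(var adapter = new DataAccessAdapter())
            {
                var order = new OrderEntity(orderId);
                adapter.FetchEntity(order);   // fetched through the Orders subsystem's adapter
                return order;
            }
        }
    }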

IMHO, I prefer to keep entities in a single project if there aren't too many. I share this assembly across all my solutions and just use the DBSpecific project in the business layer. As soon as you move up and abstract services, it could be more convenient to separate them into groups, maybe using another representation like DTOs, but that's totally up to your scenario and decisions. It's always a dilemma; you have to find the balance, and that depends on variables like scalability, project scope, service layer, etc.
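
A DTO at the service boundary could be as simple as a projection like this (DTO/entity/field names are just an example):

    using System.Collections.Generic;
    using System.Linq;
    using SD.LLBLGen.Pro.LinqSupportClasses;

    // Example DTO exposed by the service layer instead of the entity model itself.
    public class CustomerDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public static class CustomerService
    {
        public static List<CustomerDto> GetCustomers(DataAccessAdapter adapter)
        {
            var metaData = new LinqMetaData(adapter);
            return metaData.Customer
                           .Select(c => new CustomerDto { Id = c.CustomerId, Name = c.CompanyName })
                           .ToList();
        }
    }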

David Elizondo | LLBLGen Support Team
Evan
User
Posts: 67
Joined: 02-May-2012
# Posted on: 03-Apr-2020 18:44:30   

I know this is a super-old thread, but I was curious if there are any new features or suggestions out there. This continues to give us quite a bit of heartache. We continue to add tables/schemas that are really part of different modules of our overarching product. These tables/schemas often reference tables that are ALWAYS included in the core product. We also continue to customize each instance for each of our clients. These customizations also have tables that relate to our core tables and/or the schemas in our modules.

Our current process is to have one monolithic project for our entire schema, and in our custom implementations we have to manually sync every core product table into our custom monolithic project. Then we just slam the custom .dll on top of our core .dll. If any relationship has the wrong name, etc., we get runtime errors. If we change a module that our custom solution DOES NOT EVEN NEED, we can also get runtime errors.

We would be VERY interested in a solution where we could have entities live in Assembly2 that reference entities in Assembly1 (where Assembly1 has no knowledge of Assembly2). I realize this may take a bit of abstraction to accomplish, but I feel like the code-generation aspect of LLBLGen Pro may make it a bit easier...

In an ideal state I could do the following:

1. Make a core schema change, alter my db schema, upgrade/deploy my core .dll.
2. Make a module schema change, alter my db schema, upgrade/deploy my module .dll.
3. Make a custom schema change, alter my db schema, upgrade/deploy my custom .dll.

Otis
LLBLGen Pro Team
Posts: 39763
Joined: 17-Aug-2003
# Posted on: 06-Apr-2020 09:51:51   

One way to do this is the following. You use different groups in the designer, and set the Group usage setting to 'as separate projects'.

This makes relationships between the groups impossible, but there's a workaround for that. Say you have 3 groups and 'Product' is in group A. In group B there's an entity 'Order' which refers to 'Product'. The 'Product' in the context of 'Order' is used in a readonly fashion: you maintain products in a different part of the application than where you maintain orders (as orders are e.g. filed by users on a website). So you reverse-engineer the 'Product' entity again in group B, relate it to 'Order' and set its allowed actions to 'readonly' (on the mappings of B's 'Product').

As A.Product and B.Product are mapped to the same table, updating the product using A.Product will have an effect on B.Product's instances, and you can maintain B.Product through A.Product. (You can name B.Product however you want, btw; it doesn't need to be 'Product'.)
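
In code that could look roughly like this (the 'GroupA'/'GroupB' namespaces and the entity/field names are just placeholders for the generated code of the two groups):

    using System.Collections.Generic;
    using System.Linq;
    using SD.LLBLGen.Pro.LinqSupportClasses;

    public static class ProductMaintenance
    {
        // Maintain products through group A, which owns the read/write mapping.
        public static void RenameProduct(int productId, string newName)
        {
            using(var adapter = new GroupA.DatabaseSpecific.DataAccessAdapter())
            {
                var product = new GroupA.EntityClasses.ProductEntity(productId);
                adapter.FetchEntity(product);
                product.Name = newName;
                adapter.SaveEntity(product);
            }
        }

        // Group B sees the change right away: B.Product is mapped onto the same
        // table, it's just marked readonly there.
        public static List<GroupB.EntityClasses.OrderEntity> GetOrdersFor(string productName)
        {
            using(var adapter = new GroupB.DatabaseSpecific.DataAccessAdapter())
            {
                var metaData = new GroupB.Linq.LinqMetaData(adapter);
                return metaData.Order
                               .Where(o => o.Product.Name == productName)
                               .ToList();
            }
        }
    }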

This way you can have multiple 'models' on the same DB. The only thing you have to take care of is that the entities you use in a read/write fashion are the 'core' of the group. So in my example above, 'Order' is a core entity of B: if orders have to be maintained/updated/deleted etc., B.Order has to be used. If group C contains the Customer entity and it has to have a relationship with Order, I can reverse-engineer 'Order' in group C, relate it to Customer (I also don't need all fields in Order, as it's used in a readonly fashion), and when I fetch C.Customer with its orders, I get the same data as B.Order maintains, as C.Order is mapped onto the same table.
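
Fetching C.Customer with its (readonly) orders is then just a normal prefetch path inside group C; again, the names below are placeholders for group C's generated code:

    using SD.LLBLGen.Pro.ORMSupportClasses;
    using GroupC.DatabaseSpecific;
    using GroupC.EntityClasses;

    public static class CustomerReader
    {
        public static CustomerEntity GetCustomerWithOrders(int customerId)
        {
            using(var adapter = new DataAccessAdapter())
            {
                var customer = new CustomerEntity(customerId);
                // Prefetch the readonly C.Order copy together with the customer.
                var path = new PrefetchPath2((int)GroupC.EntityType.CustomerEntity);
                path.Add(CustomerEntity.PrefetchPathOrders);
                adapter.FetchEntity(customer, path);
                return customer;
            }
        }
    }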

If you then sync the relational model data, all are updated if e.g. a field is added to the Orders table in the DB: both B.Order and C.Order will receive that new field.

Frans Bouma | Lead developer LLBLGen Pro