Hierarchy implementation
Joined: 17-Jul-2007
I'm evaluating LLBLGen Pro for use by my company. We have an entity model approach at prototype level which implements exactly what we need. In our model we develop the entity model first and then generate the database and entity runtime from the meta-model. This is important because we build the entity model as part of a standard product which is then deployed to customer sites which may be SQL Server or Oracle.
However, we would prefer to use a commercial product for this rather than our own code, so I am trying to determine whether LLBLGen Pro can be made to fit this need.
Because LLBLGen Pro starts with the database I am planning to retain the front end of our model designer and let it generate the database. We would then let LLBLGen Pro work from there. I am running into one big issue I don't quite understand.
The data model we are looking at consists of a lot of abstract levels at the top with concrete entities at the nodes. So, to extend your example, we might have:
People
.....Customers
.....Employees
Where 'People' is completely abstract. In our model this means there are database tables for Customers and Employees (TargetPerEntity), but we create no database table for People as there are no people who are not customers or employees. We do create a view to help implement queries against that level which unions customers and employees.
How would I do this using LLBLGen Pro? Are tables necessary for abstract levels?
Joined: 21-Aug-2005
JamesAtHoneywell wrote:
The data model we are looking at consists of a lot of abstract levels at the top with concrete entities at the nodes. So, to extend your example, we might have:
People
.....Customers
.....Employees
Where 'People' is completely abstract. In our model this means there are database tables for Customers and Employees (TargetPerEntity), but we create no database table for People as there are no people who are not customers or employees. We do create a view to help implement queries against that level which unions customers and employees.
How would I do this using LLBLGen Pro? Are tables necessary for abstract levels?
First of all, if you have a People table, you can configure LLBLGen Pro to generate an abstract class for it, which the Customers and Employees classes then inherit from.
From the object-oriented point of view, would the People entity contain any functionality, common attributes, etc.?
Alternatively, you can create an interface and let the Customers and Employees classes implement that interface.
You'd modify the generated code and add the needed code to the following code region, so it won't get overwritten by subsequent code generation runs:
// __LLBLGENPRO_USER_CODE_REGION_START AdditionalInterfaces
// __LLBLGENPRO_USER_CODE_REGION_END
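For illustration, a minimal sketch of how that could look; the IPerson interface and its members are made up here, and the exact position of the AdditionalInterfaces region and the base class name in your generated entity class depend on the template set:

// Hypothetical interface shared by the Customers and Employees entities.
public interface IPerson
{
    string Name { get; set; }
    System.DateTime DateOfBirth { get; set; }
}

// In the generated CustomersEntity class declaration, the extra interface is
// added inside the user code region, so it survives regeneration:
public partial class CustomersEntity : EntityBase
    // __LLBLGENPRO_USER_CODE_REGION_START AdditionalInterfaces
    , IPerson
    // __LLBLGENPRO_USER_CODE_REGION_END
{
    // generated members; IPerson is satisfied by the generated Name and
    // DateOfBirth properties when those fields exist on the entity
}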
Joined: 17-Jul-2007
Thanks for your reply
The People class has properties common to all people, but it is an abstract class whose purpose is to provide a common base for other classes such as Customers and Employees, which have additional attributes specific to those classes. In our implementation we would not create a People table, only an Employees table and a Customers table.
The runtime code generated for this model would create the class derivations implied by this, e.g. Employee and Customer are derived from People. It would also allow queries against Employees, Customers and People, where the query against People would use a view to join Customers and Employees.
I'm assuming from your statements that LLBLGen Pro would fragment these entities across multiple tables and require "People" records for all Employees and Customers? Is that the case? Or is there some other reason you require a table for a completely abstract type?
Joined: 14-Dec-2003
I don't know how many records your tables contain, but in my experience (with SQL Server) unions perform much more poorly than having (in your case) the People table with left joins to the Customer and Employee tables, as long as you don't have too many left joins.
Joined: 17-Jul-2007
Your statement may well be true but...
- We will have some tables with millions of records. (a very few). I would prefer to have good performance on these and trade off performance on smaller tables if necessary.
- The fragmented model requires a join every time you build the entity. The non-fragmented model requires neither a join nor a union when going to a non-abstract type (Employee). This will be very common and in our case we always go against the non-abstract type when navigating associations. So my guess is we will query single record entities more commonly than anything else.
- Our customers need good reporting access. Fragmented tables makes this more difficult unless there are supporting views for consistent entity selection. (Views are required for either model. We create views for abstract classes.)
- In our model, the abstract hierarchy level is only represented by views in the DB. So we can make changes to the hierarchy, adding and removing levels, without requiring changes to tables. We just recreate views and, of course, analyse the effect on apps that might use abstract levels.
- To achieve our final goals, it is important that we be free to create a fairly rich hierarchy of classes. E.g. we would really create Party, Organization, People, Employees, Operators, ShiftOperators, and I have much deeper industry-based examples. In the fragmented model every level implies fragmentation. In the non-fragmented model the depth of the hierarchy is not particularly relevant; the number of final non-abstract types within a branch is relevant, but not how deep the tree goes.
I appreciate your responses. I am certainly not trying to argue that one approach is better or worse, only which approach seems to fit our need. If I have misstated anything, especially concerning LLBLGen Pro's capabilities, please correct me.
Joined: 17-Aug-2003
JamesAtHoneywell wrote:
I'm evaluating LLBLGen Pro for use by my company. We have an entity model approach at prototype level which implements exactly what we need. In our model we develop the entity model first and then generate the database and entity runtime from the meta-model. This is important because we build the entity model as part of a standard product which is then deployed to customer sites which may be SQL Server or Oracle.
However, we would prefer to use a commercial product for this rather than our own code, so I am trying to determine whether LLBLGen Pro can be made to fit this need.
Because LLBLGen Pro starts with the database I am planning to retain the front end of our model designer and let it generate the database. We would then let LLBLGen Pro work from there. I am running into one big issue I don't quite understand.
The data model we are looking at consists of a lot of abstract levels at the top with concrete entities at the nodes. So, to extend your example, we might have:
People
.....Customers
.....Employees
Where 'People' is completely abstract. In our model this means there are database tables for Customers and Employees (TargetPerEntity), but we create no database table for People as there are no people who are not customers or employees. We do create a view to help implement queries against that level which unions customers and employees.
How would I do this using LLBLGen Pro? Are tables necessary for abstract levels?
Yes, that's necessary, as LLBLGen Pro lets you work with the entities which are defined in the db.
The problem with your model is the following: say I want to fetch all people from the db. If the o/r mapper has no notion of 'People', how am I supposed to do that? So i.o.w.: I do have to define 'People' in the context of the o/r mapper, and this is logical, as it's an entity in your system.
So going back to that, why is People not available in the db? Or better: why do you COPY attributes between customer and employee in the db, as this is redundant (denormalization to normalization level 2 or lower)? The thing is: the 'People' entity contains attributes shared between Customer and Employee, e.g. Name, date of birth, sex etc. This means that it's easier to create a target table for this instead of copying the attributes into both tables.
What's the core reason you use an abstract entity 'People' which isn't persisted, if I may ask? Is this implementation oriented?
Very few O/R mappers support this, and if they do so, they do it via interfaces, so Customer and Employee both implement the IPerson interface for example, and you can query on IPerson, which then will return employees and customers.
I can tell you up front: that's not going to perform well with millions of rows no matter what O/R mapper you pick. The only inheritance scenario which does perform is TargetPerEntityHierarchy, as you then access a single table. Any other way is slow(er).
I know the person/customer/employee/manager/boardmember hierarchy we also use in the documentation is somewhat flawed: it's not a good candidate for inheritance, as it embeds the flaw that one instance can change type or become multiple types all of a sudden (e.g. customer becomes employee, employee becomes customer etc.). It is, however, a nice hierarchy to describe inheritance with, as it embeds a semantic hierarchy at the same time.
It's often better though to revert to a role-based setup, where you don't use inheritance but use a set of roles and, based on the role, the entity has different related data (which is accessible via traversal of properties), because that's flexible and you can change role at runtime without problems.
Joined: 17-Jul-2007
Thanks for the considered response. I've embedded my responses below.
_The problem with your model is the following: say I want to fetch all people from the db. If the o/r mapper has no notion of 'People', how am I supposed to do that? So i.o.w.: I do have to define 'People' in the context of the o/r mapper, and this is logical, as it's an entity in your system._
People don't exist independently in my example. So 'people' is just a join between customers and employees. We represent it as a view on the database but that is just an implementation choice. It could just as easily be commonly used join syntax. To facilitate external reporting tools we represent People as a view.
_So going back to that, why is People not available in the db? Or better: why do you COPY attributes between customer and employee in the db, as this is redundant (denormalization to normalization level 2 or lower)? The thing is: the 'People' entity contains attributes shared between Customer and Employee, e.g. Name, date of birth, sex etc. This means that it's easier to create a target table for this instead of copying the attributes into both tables._
I understand your point here but we prefer self-contained entity tables for several reasons:
1. Customers reporting against the database prefer it to fragmented tables which require a lot of joins.
2. We have several very large tables (up to millions of records) where performance would be a major issue with fragmented entities.
3. Our intent is to build a fairly deep, descriptive hierarchy of classes, and in your model each level in the hierarchy would lead to additional fragmentation.
4. Our model will probably be a bit fluid for a while. We can actually remove and add levels within the hierarchy without changing existing table layouts.
5. In future we might see abstract classes involving entities from different sources (databases). In your model this would involve fragmenting entities across databases.
_ What's the core reason you use an abstract entity 'People' which isn't persisted, if I may ask? Is this implementation oriented? _
I believe most of the reasons involve capabilities your model provides as well, but you asked, so...
The primary reason for an abstract entity is to allow polymorphic programming. Code written against a 'People' entity type can be applied to entity instances of type customer and employee. The entity classes we generate allow this. It is also necessary to be able to query against "people" where the results are entities from both the customer and employee table. And finally we can build associations between the people entity type and other types. So people have "Locations" whether they are customers or employees. In our model the abstract people entity type can be queried, used in code and associated with other entities exactly as if they were concrete types.
_Very few O/R mappers support this, and if they do so, they do it via interfaces, so Customer and Employee both implement the IPerson interface for example, and you can query on IPerson, which then will return employees and customers.
I can tell you up front: that's not going to perform well with millions of rows no matter what O/R mapper you pick. The only inheritance scenario which does perform is TargetPerEntityHierarchy, as you then access a single table. Any other way is slow(er).
I know the person/customer/employee/manager/boardmember hierarchy we also use in the documentation is somewhat flawed: it's not a good candidate for inheritance, as it embeds the flaw that one instance can change type or become multiple types all of a sudden (e.g. customer becomes employee, employee becomes customer etc.). It is, however, a nice hierarchy to describe inheritance with, as it embeds a semantic hierarchy at the same time.
It's often better though to revert to a role-based setup, where you don't use inheritance but use a set of roles and, based on the role, the entity has different related data (which is accessible via traversal of properties), because that's flexible and you can change role at runtime without problems._
I have to disagree that my model is less performant. In fact I am quite certain it will perform significantly better, especially with very large tables. But this is a difficult thing to argue as the implementation details, nature of the data and nature of the applications accessing the data all affect which model will perform better.
The trade-off is between requiring joins to construct entity instances and requiring unions to collect abstract groupings. In your model joining is required every time one gets an entity, while in my model unions are only required when one queries the abstract level. A query against employees is a query against a single database table. I also believe that the most common database access will involve the navigation of associations between entity instances. That is, not querying for people but following an association such as "Location" from a People instance to a Location instance. Even though the association specification involved the abstract type of People rather than a concrete class such as Customer, my implementation will know the association involves a specific type like Customer and the actual query against the database will end up being a selection against the primary key of the single actual table that holds that entity.
Joined: 17-Aug-2003
JamesAtHoneywell wrote:
Thanks for the considered response. I've embedded my responses below. _The problem with your model is the following: say I want to fetch all people from the db. If the o/r mapper has no notion of 'People', how am I supposed to do that? So i.o.w.: I do have to define 'People' in the context of the o/r mapper, and this is logical, as it's an entity in your system._ People don't exist independently in my example. So 'people' is just a join between customers and employees. We represent it as a view on the database but that is just an implementation choice. It could just as easily be commonly used join syntax. To facilitate external reporting tools we represent People as a view.
Ok. So customer and employee are effectively empty, as all attributes are in people already, otherwise you can't show customer + employees in a single view, correct?
_So going back to that, why is People not available in the db? Or better: why do you COPY attributes between customer and employee in the db, as this is redundant (denormalization to normalization level 2 or lower)? The thing is: the 'People' entity contains attributes shared between Customer and Employee, e.g. Name, date of birth, sex etc. This means that it's easier to create a target table for this instead of copying the attributes into both tables._ I understand your point here but we prefer self-contained entity tables for several reasons. 1. Customers reporting against the database prefer it to fragmented tables which require a lot of joins. 2. We have several very large tables (up to millions of records) where performance would be a major issue with fragmented entities. 3. Our intent is to build a fairly deep, descriptive hierarchy of classes, and in your model each level in the hierarchy would lead to additional fragmentation. 4. Our model will probably be a bit fluid for a while. We can actually remove and add levels within the hierarchy without changing existing table layouts. 5. In future we might see abstract classes involving entities from different sources (databases). In your model this would involve fragmenting entities across databases.
You can map a hierarchy onto a single table if you want to (with a discriminator column). Though I fail to see how your model can be mapped properly (with any mapper) onto the table model you're describing AND be performant AND flexible AND be able to fetch the different entities based on type distinction. The reason is that a deep inheritance hierarchy in general leads to a deep table hierarchy OR a wide table with lots of nullable fields.
_ What's the core reason you use an abstract entity 'People' which isn't persisted, if I may ask? Is this implementation oriented? _ I believe most of the reasons include capability your model provides as well, but you asked so...
The primary reason for an abstract entity is to allow polymorphic programming. Code written against a 'People' entity type can be applied to entity instances of type customer and employee. The entity classes we generate allow this. It is also necessary to be able to query against "people" where the results are entities from both the customer and employee table. And finally we can build associations between the people entity type and other types. So people have "Locations" whether they are customers or employees. In our model the abstract people entity type can be queried, used in code and associated with other entities exactly as if they were concrete types.
If I read People==Person then this is the typical IPerson interface approach seen in some mappers (rarely though).
What you describe is typically done by using either a 3-table approach or a single-table approach with a discriminator column. It's a simple NIAM hierarchical entity graph which is converted to a relational model. The thing is that you factor in a db optimization where you do locate the data in two tables, not three. That's something we don't support indeed; we do require the 3rd table if you want to have Customer and Employee in separate tables.
If People != Person, something is weird: your supertype is plural, while the subtypes are singular. It's not a supertype actually, it's a set definition.
So the real hierarchy is:
Person
.....Customer
.....Employee
and a set definition 'People' which contains customers and employees. People isn't a supertype in that scenario, Person is. People is never a single entity.
_Very few O/R mappers support this, and if they do so, they do it via interfaces, so Customer and Employee both implement the IPerson interface for example, and you can query on IPerson, which then will return employees and customers.
I can tell you up front: that's not going to perform well with millions of rows no matter what O/R mapper you pick. The only inheritance scenario which does perform is TargetPerEntityHierarchy, as you then access a single table. Any other way is slow(er).
I know the person/customer/employee/manager/boardmember hierarchy we also use in the documentation is somewhat flawed: it's not a good candidate for inheritance, as it embeds the flaw that one instance can change type or become multiple types all of a sudden (e.g. customer becomes employee, employee becomes customer etc.). It is, however, a nice hierarchy to describe inheritance with, as it embeds a semantic hierarchy at the same time.
It's often better though to revert to a role-based setup, where you don't use inheritance but use a set of roles and, based on the role, the entity has different related data (which is accessible via traversal of properties), because that's flexible and you can change role at runtime without problems._
I have to disagree that my model is less performant. In fact I am quite certain it will perform significantly better, especially with very large tables. But this is a difficult thing to argue as the implementation details, nature of the data and nature of the applications accessing the data all affect which model will perform better.
I was referring to the typical IPerson query implementation in the few o/r mappers which support this. The main reason is the typical complex code necessary to do this.
The trade-off is between requiring joins to construct entity instances and requiring unions to collect abstract groupings. In your model joining is required every time one gets an entity
Sure it joins, how else are you going to get all instances of the type 'supertype'? You have to, at least in the scenario where every entity has its own table/view.
However, your 'People' isn't a supertype of Customer. It's a set definition. IMHO something else.
while in my model unions are only required when one queries the abstract level. A query against employees is a query against a single database table.
Fetching all 'People' in your case only succeeds without complex code if the attributes of employee and customer are equal. Otherwise you have to take into account ALL types in the People set up front, and add additional empty columns to every table fetch to be able to perform a UNION.
Sure, fetching all employees is then still a query against one table. What if you create a subtype of Employee which has an additional attribute and thus its own table? Fetching all employees then means a join between the two tables, not a union, unless you want to store all employee-table attribute data redundantly in that second table. Which causes headaches on its own, because PK uniqueness for example is hard to accomplish, as the tables operate on their own.
I also believe that the most common database access will involve the navigation of associations between entity instances. That is, not querying for people but following an association such as "Location" from a People instance to a Location instance. Even though the association specification involved the abstract type of People rather than a concrete class such as Customer, my implementation will know the association involves a specific type like Customer and the actual query against the database will end up being a selection against the primary key of the single actual table that holds that entity.
LLBLGen Pro will do that too, no matter what your inheritance type is: it uses the FK value(s) to obtain the related entity and fetch it accordingly.
Joined: 17-Jul-2007
First, let me say how much I appreciate your responses. My primary objective here is to understand the LLBL model, not to describe my own, but it is certainly helpful to achieve this by comparing what I know with your model.
In your response you've correctly understood some things and not others about my implementation. This is of course my fault as I have not provided a complete description. For your information I am attaching a more complete description which I think will clarify some things. I am quite happy to hear your thoughts on these but my real objective is to understand LLBL so let me move on to some additional questions.
I think I understand your table structure fairly well now. So let me start with my original question now that I understand things a little better. How do you implement what I refer to as a purely abstract level? I think I have two options: create a database table that normalizes the properties for that level, or create a view and map that view to a new entity type which for LLBL would not be in any way associated with the hierarchy.
Is there any notion of querying against a hierarchy type, like People, and getting back a derived type like Employee? If so I assume you must implement additional database access after getting the People record to get the Employee extensions. Or would you just return entities of type People?
In conjunction with the above question, how are associations handled where the definition of the association is at a level like People and a specific instance of the association is against a derived level like Employee? If the association is defined between two people (Spouse) can I take an instance of type Employee and seamlessly follow the association to another instance which may be of type Employee? Would the navigation return something of type Employee or something of type People? That is, is the return type based on how the association is defined or based on the type of the actual instance involved?
Does LLBLGen Pro support multiple, simultaneous database connections? Access seems to follow a static access model so I'm not seeing how this would be possible.
Likewise how does this static model handle multi-threaded operations? We would be using this within Web Services where multiple calls may be operating simultaneously and each thread needs to have its own object space and transaction context. Is this possible?
Filename | File size | Added on | Approval |
---|---|---|---|
ModelDescription.doc | 27,136 | 25-Jul-2007 20:20.00 | Approved |
Joined: 17-Aug-2003
JamesAtHoneywell wrote:
First, let me say how much I appreciate your responses. My primary objective here is to understand the LLBL model, not to describe my own, but it is certainly helpful to achieve this by comparing what I know with your model.
In your response you've correctly understood some things and not others about my implementation. This is of course my fault as I have not provided a complete description. For your information I am attaching a more complete description which I think will clarify some things. I am quite happy to hear your thoughts on these but my real objective is to understand LLBL so let me move on to some additional questions.
Ok. I've read your .doc file, and if you have a single table with a discriminator column for 'type' and map Person, Employee and Manager on that table, where Person is marked abstract and Employee + Manager are not, you get what you want. One can't instantiate a PersonEntity as it's abstract, so the developer has to instantiate EmployeeEntity or ManagerEntity. Fetches are done on a single table, so no joins. Type filters are supported, as are associations (relations), and you don't have to write the views, as that's been taken care of for you. (Fetch a collection of employees and you get all Employee entities in EmployeeEntity instances and all managers in ManagerEntity instances.) The table has to have all fields of Manager + Employee, and fields not in Employee should be nullable. You could make Person have a couple of fields and map the rest in Employee and Manager.
The only side effect is that 'Person' is mapped onto a database element, as it's an entity in the system, albeit abstract.
I think I understand your table structure fairly well now. So let me start with my original question now that I understand things a little better. How do you implement what I refer to as a purely abstract level? I think I have two options: create a database table that normalizes the properties for that level, or create a view and map that view to a new entity type which for LLBL would not be in any way associated with the hierarchy.
A pure abstract level as in 'this entity class represents an entity that's not in the db' isn't supported. The LLBLGen Pro idea is that an entity in the system is formulated (indirectly) in a relational model element.
Is there any notion of querying against a hierarchy type, like People, and getting back a derived type like Employee? If so I assume you must implement additional database access after getting the People record to get the Employee extensions. Or would you just return entities of type People?
If you query for type T and T is a supertype in a hierarchy, you'll get all subtypes. But, T has to be a type which is mapped onto a db element. So as I described earlier, to query on all types which implement IPerson for example, that's not possible.
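As a rough sketch of what such a polymorphic fetch looks like with the adapter templates (the Person/Employee/Manager names are from the example above; the generated factory and collection names are assumed here):

// Fetch every row of the hierarchy's target table; rows whose discriminator
// marks them as employee or manager come back as EmployeeEntity / ManagerEntity.
EntityCollection people = new EntityCollection(new PersonEntityFactory());
using(DataAccessAdapter adapter = new DataAccessAdapter())
{
    adapter.FetchEntityCollection(people, null);    // null filter: fetch all
}
foreach(IEntity2 person in people)
{
    ManagerEntity manager = person as ManagerEntity;
    if(manager != null)
    {
        // manager-specific fields are available here
    }
}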
In conjunction with the above question, how are associations handled where the definition of the association is at a level like People and a specific instance of the association is against a derived level like Employee? If the association is defined between two people (Spouse) can I take an instance of type Employee and seamlessly follow the association to another instance which may be of type Employee? Would the navigation return something of type Employee or something of type People? That is, is the return type based on how the association is defined or based on the type of the actual instance involved?
If the relation is between Person and itself, and an instance E of a given Employee has a relation with another instance of Employee, namely E2, and I follow the Spouse field (which is mapped onto the relation between Person and itself), then I get E2 back, thus a full EmployeeEntity object. The TYPE of the entity returned by the relation is Person (as the relation is between Person and itself), so a cast is necessary, but the entity is in its own type, so not stored in a Person object.
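A small selfservicing-style sketch of that navigation, assuming a Spouse navigator mapped on the Person self-relation and a made-up PK value:

int employeeId = 1;    // made-up PK value
// Selfservicing: constructing with the PK fetches the entity.
EmployeeEntity employee = new EmployeeEntity(employeeId);
// The property is typed as PersonEntity because the relation is defined on
// Person; the actual instance is an EmployeeEntity, so a cast exposes the
// employee-specific members.
PersonEntity spouse = employee.Spouse;
EmployeeEntity spouseAsEmployee = spouse as EmployeeEntity;
if(spouseAsEmployee != null)
{
    // work with the employee-specific fields of the spouse here
}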
LLBLGen Pro does allow you to specify interfaces for entities which are then generated into the class header (though you should provide the implementation with a partial class) and which you can use to work with entity instances. The one thing that can't be done is using these interfaces for queries.
Does LLBLGen Pro support multiple, simultaneous database connections? Access seems to follow a static access model so I'm not seeing how this would be possible.
Of course it does. You can create as many DataAccessAdapter instances as you want and create connections with them. In SelfServicing, things are taken care of for you, though there too you can have multiple connections to a db, e.g. one per thread.
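For example, with adapter (the connection strings, entity and factory names are placeholders):

// Two independent adapters, each with its own connection, usable side by side.
EntityCollection customers = new EntityCollection(new CustomerEntityFactory());
EntityCollection archivedCustomers = new EntityCollection(new CustomerEntityFactory());
using(DataAccessAdapter adapterA = new DataAccessAdapter("data source=serverA;initial catalog=Sales;..."))
using(DataAccessAdapter adapterB = new DataAccessAdapter("data source=serverB;initial catalog=Archive;..."))
{
    adapterA.FetchEntityCollection(customers, null);
    adapterB.FetchEntityCollection(archivedCustomers, null);
}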
So I have the feeling you're referring to something else. Could you elaborate a bit on that?
Likewise how does this static model handle multi-threaded operations? We would be using this within Web Services where multiple calls may be operating simultaneously and each thread needs to have its own object space and transaction context. Is this possible?
Yes. As long as you're not sharing objects among threads, you're perfectly safe.
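For instance, a typical web service method would create everything it needs per call, roughly like this sketch (method, entity and transaction names are made up):

[WebMethod]
public void SaveCustomer(CustomerEntity customer)
{
    // Each call gets its own adapter, so its own connection and transaction;
    // nothing is shared between the threads serving concurrent calls.
    using(DataAccessAdapter adapter = new DataAccessAdapter())
    {
        adapter.StartTransaction(IsolationLevel.ReadCommitted, "SaveCustomer");
        try
        {
            adapter.SaveEntity(customer);
            adapter.Commit();
        }
        catch
        {
            adapter.Rollback();
            throw;
        }
    }
}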
Joined: 17-Jul-2007
On the question of "Multiple Database Connections", perhaps I should have said "Multiple Databases". In the documentation I'm seeing examples like
CustomerEntity customer = new CustomerEntity();
customer.FetchUsingPK("CHOPS");
It was not clear to me how, in a system with multiple databases, this code would know which database to fetch the customer from.
One additional question I forgot to ask. We support multiple database vendors, specifically SQL Server, Oracle and (perhaps) DB2. Let's say we set up a SQL Server database, run LLBLGen Pro against it, tweak the model, generate the runtime and write application code against this. Then we want to deploy these applications against an Oracle database. How much of this SQL Server effort is usable? Clearly the application code written against the SQL Server runtime would be the most important part to re-use. If you could walk through the steps we would take to accomplish this it would probably be more useful than me asking dumb questions.
Joined: 17-Aug-2003
JamesAtHoneywell wrote:
On the question of "Multiple Database Connections", perhaps I should have said "Multiple Databases". In the documentation I'm seeing examples like
CustomerEntity customer = new CustomerEntity();
customer.FetchUsingPK("CHOPS");
It was not clear to me how, in a system with multiple databases, this code would know which database to fetch the customer from.
That's selfservicing code. Please check the Concepts -> Templates and template groups for a list of differences between adapter and selfservicing and when to pick which. If you want multiple connections to multiple different databases, e.g. you fetch from oracle and you want to save into sqlserver, you have to use adapter. If you want to use multiple catalogs in the same project, you can do that without a problem with selfservicing as well.
The persistence information contains the catalog in which the entity is located. This info is located in the code. Adapter has a facility to let you modify this info for schemas and catalogs per call. So you can fetch an entity using adapter A from catalog C1 and save it in catalog C2. This is called catalog name overwriting.
Adapter also offers the ability to use an entity object without ties to a database system. So you can fetch an entity from Oracle using the DataAccessAdapter instance belonging to the Oracle project and save it into SQL Server using the DataAccessAdapter instance belonging to the SqlServer project.
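In code that could look roughly like this; 'OracleData' and 'SqlServerData' stand for the two generated DbSpecific projects, the field name is hypothetical, and the row is assumed to exist in both databases:

// The entity classes live in the database-generic project, so the same
// CustomerEntity instance can be handled by both database-specific adapters.
CustomerEntity customer = new CustomerEntity("CHOPS");
using(OracleData.DataAccessAdapter oracleAdapter = new OracleData.DataAccessAdapter())
{
    oracleAdapter.FetchEntity(customer);    // fetched from Oracle
}
customer.CompanyName = "New name";          // hypothetical field change
using(SqlServerData.DataAccessAdapter sqlAdapter = new SqlServerData.DataAccessAdapter())
{
    sqlAdapter.SaveEntity(customer);        // change saved into SQL Server
}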
One additional question I forgot to ask. We support multiple database vendors, specifically SQL Server, Oracle and (perhaps) DB2. Let's say we set up a SQL Server database, run LLBLGen Pro against it, tweak the model, generate the runtime and write application code against this. Then we want to deploy these applications against an Oracle database. How much of this SQL Server effort is usable? Clearly the application code written against the SQL Server runtime would be the most important part to re-use. If you could walk through the steps we would take to accomplish this it would probably be more useful than me asking dumb questions.
You should be able to use all the code directly on Oracle. The steps to follow are generally the following:
- Create a SqlServer project.
- Generate Adapter code.
- Write your code with that generated code. Use IDataAccessAdapter instead of DataAccessAdapter.
- Use our project converter toolkit (available in the SDK, C# sourcecode), which converts an LLBLGen Pro project file from one DB type to another. You can use this tool to convert your SqlServer .lgp file to an Oracle one.
- Load the Oracle version of the project into LLBLGen Pro.
- Generate DDL SQL for Oracle using the DDL SQL templates. These templates are available to customers for free of course, in the customer area.
- Run the DDL SQL on your Oracle box. Tweak it to meet your needs if necessary; some types aren't supported on Oracle, like bit/bool fields, so you have to choose different types for these, e.g. NUMBER(1,0).
- Refresh the Oracle project on the new schema. If you need bit/boolean fields, you can use the type converter shipped with LLBLGen Pro to convert numeric fields into booleans and back. If you need others, like Guid types which are for example mapped on char(30) fields, you can write a simple type converter yourself, using the SDK sourcecode (it's a couple of lines).
- Also generate Adapter code.

You now have the db-generic vs.net project twice and, per database, a db-specific vs.net project. Throw away one of the db-generic vs.net projects.
- Create a simple factory which returns the right DataAccessAdapter instance based on the db you want, so either the DataAccessAdapter instance from the SqlServer DbSpecific vs.net project, or the one from the Oracle DbSpecific vs.net project.
Now when you want to fetch data, you simply do, for example:

using(IDataAccessAdapter adapter = MyAdapterFactory.Create(myDbID))
{
    adapter.FetchEntityCollection(customers, null);
}

where MyAdapterFactory is the simple class which selects which adapter to use.
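A minimal sketch of such a factory, assuming a made-up enum to identify the database and the two DbSpecific namespaces used in the earlier sketch:

public enum DatabaseId { SqlServer, Oracle }

public static class MyAdapterFactory
{
    // Returns an adapter from the right DbSpecific project; the calling code
    // only depends on IDataAccessAdapter from the db-generic project.
    public static IDataAccessAdapter Create(DatabaseId id)
    {
        switch(id)
        {
            case DatabaseId.Oracle:
                return new OracleData.DataAccessAdapter();
            default:
                return new SqlServerData.DataAccessAdapter();
        }
    }
}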
The key is to keep the entities in sync on both db's, so a customer entity has to have the same fields on SQL Server and Oracle, and the fields have to be in the same order. If .NET types differ, either alter the db types to make the .NET types match, or add a type converter to convert the db type to a .NET type transparently.
Type converters are LLBLGen Pro's way to convert db types to .NET types when the .NET type has to be different from the default .NET type the db type is converted into, and vice versa. This is done transparently, so you simply work with the .NET type the entity field has, and how it is converted to the db type is handled under the hood.
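As a rough idea of the shape of such a converter, here's a sketch built on the standard System.ComponentModel.TypeConverter base class; see the SDK documentation for the exact requirements LLBLGen Pro places on the class:

using System;
using System.ComponentModel;
using System.Globalization;

// Converts a numeric db value (e.g. Oracle NUMBER(1,0), surfaced as decimal)
// to a .NET bool and back.
public class NumericBooleanConverter : TypeConverter
{
    public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
    {
        return sourceType == typeof(decimal);
    }

    public override bool CanConvertTo(ITypeDescriptorContext context, Type destinationType)
    {
        return destinationType == typeof(decimal);
    }

    public override object ConvertFrom(ITypeDescriptorContext context, CultureInfo culture, object value)
    {
        // db -> .NET: any non-zero value is true
        return Convert.ToDecimal(value) != 0m;
    }

    public override object ConvertTo(ITypeDescriptorContext context, CultureInfo culture, object value, Type destinationType)
    {
        // .NET -> db
        return ((bool)value) ? 1m : 0m;
    }
}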
Joined: 17-Jul-2007
Again, thanks for the response.
This all sounds quite reasonable and sounds like it would meet our needs quite nicely. I will just need to dig in and validate my understanding a bit more and also do some performance testing of the database architecture with the kinds of models and data we expect.
In all, the product seems quite mature and well thought out.
The only general remark I would make applies to virtually everything I have seen on the market. Why does everyone want me to build my database architecture to fit their object modeling idiosyncrasies and work backwards to get the entity model? Why not let me build and maintain the entity model and let everything flow out from that model using configurable generation patterns? Create the database schema as your implementation requires, and let my designers and developers concentrate on what is important to them: the conceptual entity model.
This approach is a requirement for us as we want a flexible model that will work across multiple database vendors, product lines and get customized to various degrees across many different sites. As a result we will need to maintain our own entity modeling tool to drive the database generation process so that LLBL Gen can "reverse engineer" the entity model.
Sorry to end on a negative note. But as I say, no one else seems to do this either, so I guess I am just being unrealistic.
Joined: 17-Aug-2003
JamesAtHoneywell wrote:
Again, thanks for the response.
This all sounds quite reasonable and sounds like it would meet our needs quite nicely. I will just need to dig in and validate my understanding a bit more and also do some performance testing of the database architecture with the kinds of models and data we expect.
In all, the product seems quite mature and well thought out.
Thank you!
The only general remark I would make applies to virtually everything I have seen on the market. Why does everyone want me to build my database architecture to fit their object modeling idiosyncrasies and work backwards to get the entity model? Why not let me build and maintain the entity model and let everything flow out from that model using configurable generation patterns? Create the database schema as your implementation requires, and let my designers and developers concentrate on what is important to them: the conceptual entity model.
That approach in theory sounds very solid, and in theory I definitely agree with you. There's only one problem: the database is an expensive resource (not in $$$ but in performance) which isn't infinitely fast, on the contrary. It's likely that your application spends 20-30% of its time inside the DB. So it becomes a deciding factor in what determines the performance of the application.
So ignoring that the db is there or ignoring that the db has limitations and thus offers you no choice but to take them into account will likely result in a slow performing application. Perhaps not directly on the developer's machine with a few records. Though after a while in production with millions of rows, it will.
Please read my essay which was also part of Jimmy Nilsson's ADDDP book: http://weblogs.asp.net/fbouma/archive/2006/08/....ain-Model.aspx Don't let the title fool you
This approach is a requirement for us as we want a flexible model that will work across multiple database vendors, product lines and get customized to various degrees across many different sites. As a result we will need to maintain our own entity modeling tool to drive the database generation process so that LLBL Gen can "reverse engineer" the entity model.
Sorry to end on a negative note. But as I say, no one else seems to do this either, so I guess I am just being unrealistic.
No, you're not unrealistic at all; it's very logical to follow that approach at first, as it is the most natural one to follow when your starting point is a class model and you just want to persist the data of a running version of that somewhere.
As there's a mismatch between the OO model in class format and the relational model in table form, the conversion brings a set of limits to the party which shouldn't be ignored: the db is part of the system. This limits the freedom of the class model to some extent: you can't design deep hierarchies, map them all to tables and expect very fast performance, even though it should be possible in theory.