Architecture Design Pattern?

Darwin
User
Posts: 38
Joined: 12-Apr-2005
# Posted on: 17-Apr-2005 19:05:46   

What is the optimal design pattern to be used with LLBLGen generated code?

Before LLBLGen, I've used a service based collaboration pattern. This is just how I was taught to do it... I didn't even know that was what it was called until I read this article: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpatterns/html/DesDTO.asp.

It appears that LLBLGen code is more suited to an instance based collaboration pattern. All of a sudden I feel like a duck out of the water!

I'm developing an n-tier application using .NET Remoting, and thus the DataAccessAdapter code base. Can anyone give me solid advice on the appropriate architecture design pattern to use with LLBLGen? A code example / snippet?

Thanks, Darwin

Otis
LLBLGen Pro Team
Posts: 39750
Joined: 17-Aug-2003
# Posted on: 18-Apr-2005 10:03:42   

Darwin wrote:

What is the optimal design pattern to be used with LLBLGen generated code?

Before LLBLGen, I've used a service based collaboration pattern. This is just how I was taught to do it... I didn't even know that was what it was called until I read this article: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpatterns/html/DesDTO.asp.

The link seems to lead to a page which isn't there.

It appears that LLBLGen code is more suited to an instance based collaboration pattern. All of a sudden I feel like a duck out of the water!

I'm developing an n-tier application using .NET Remoting, and thus the DataAccessAdapter code base. Can anyone give me solid advice on the appropriate architecture design pattern to use with LLBLGen? A code example / snippet?

In the situation you're in, the main question is which approach you choose: 1) a service with methods like SaveCustomer(customerId, companyname, contactfirstname, etc...), or 2) a service with methods like SaveCustomer(customerObject).

I'd opt for 2), as it is the easiest to develop. Share the database generic project on both client and server; the database specific project is only used on the server. The client works with entity objects retrieved from the server. The server exposes methods which work with entity objects but also hide filtering and prefetch path usage, so the client can't cook up its own filtering; it has to rely on the server for that.
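As a very rough sketch of what that could look like (ICustomerService, CustomerService and the Northwind-style CustomerEntity are names made up for this illustration, not something the designer generates; remoting registration and error handling are left out):

    // Hypothetical remoted facade. CustomerEntity comes from the shared database
    // generic project; the DataAccessAdapter and all filter / prefetch path logic
    // stay on the server. (Uses SD.LLBLGen.Pro.ORMSupportClasses plus the two
    // generated projects.)
    public interface ICustomerService
    {
        CustomerEntity GetCustomer(string customerId);
        bool SaveCustomer(CustomerEntity toSave);
    }

    public class CustomerService : MarshalByRefObject, ICustomerService
    {
        public CustomerEntity GetCustomer(string customerId)
        {
            CustomerEntity customer = new CustomerEntity(customerId);
            using(DataAccessAdapter adapter = new DataAccessAdapter())
            {
                adapter.FetchEntity(customer);   // filters / prefetch paths are defined here, not on the client
            }
            return customer;   // serialized by value back to the client
        }

        public bool SaveCustomer(CustomerEntity toSave)
        {
            using(DataAccessAdapter adapter = new DataAccessAdapter())
            {
                return adapter.SaveEntity(toSave, true);   // true: refetch after save
            }
        }
    }

The client then only references the database generic project plus this interface, so all it can do is ask the server for entities and hand changed entities back.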

You could opt for one service with all the methods, or for separate services per feature group, like a CustomerManager service, an OrderManager service, etc.

The second option leads to a problem that's one of the core issues with SOA: what if you need two services to get your data? For example: get a customer with a filter on orders, or get customers and get orders where you define different criteria for each, and the two resultsets have to be merged.

For the client, this is not interesting: the client just wants to get data and save data. So the service should offer a single interface to the client, or at least an interface that doesn't require client-side data processing just because the server happens to be divided into several services.

Frans Bouma | Lead developer LLBLGen Pro
Darwin
User
Posts: 38
Joined: 12-Apr-2005
# Posted on: 18-Apr-2005 18:48:42   

1) a service with methods like: SaveCustomer(customerId, companyname, contactfirstname etc...) or 2) a service with methods like: SaveCustomer(customerObject);

I was headed in the direction of answer #2 also. My issue came up when I extended the service to include RefreshCustomer(customerObject), or when merging the results of SaveCustomer(customerObject) back into the existing customerObject on the client side.

The second option leads to a problem that's one of the core issues with SOA: what if you need two services to get your data? For example: get a customer with a filter on orders, or get customers and get orders where you define different criteria for each, and the two resultsets have to be merged.

I'm not sure I follow on this one. Is it the same as my issue above (merging objects?) or are there issues beyond that which I have yet to see?

I've been reading a LOT in the Architecture forum this weekend, and from what I could digest I am thinking that the best route for me would be to use ADO.NET datasets as data transfer objects, completely hiding the LLBLGen object model from the client. These give me merge capabilities in addition to separating the functions of the BL and the PL completely.

Your thoughts?

Thanks, Darwin

Darwin
User
Posts: 38
Joined: 12-Apr-2005
# Posted on: 18-Apr-2005 19:38:12   

ADO.NET datasets would also give me some built-in validation such as foreign key constraints, allow / don't allow null, string length, and a place to hook errors into the model. Does LLBLGen have a way that I can do that? It may have... I only started working with it a week ago!

In other words, is there a way that I can tell if a property is required, or is a foreign key (and thus must exist elsewhere), or what its length is if it's a string property? Where would I hook in errors relative to a specific property?

I hope you don't mind all the questions. I probably shouldn't be contemplating switching to a new mechanism for the DAL on a live project... but the advantages that LLBLGen has to offer are just too great to not give it a go!

Thanks, Darwin

Otis
LLBLGen Pro Team
Posts: 39750
Joined: 17-Aug-2003
# Posted on: 19-Apr-2005 10:59:31   

Darwin wrote:

1) a service with methods like: SaveCustomer(customerId, companyname, contactfirstname etc...) or 2) a service with methods like: SaveCustomer(customerObject);

I was headed in the direction of answer #2 also. My issue came up when I extended the service to include RefreshCustomer(customerObject), or when merging the results of SaveCustomer(customerObject) back into the existing customerObject on the client side.

Remoting effectively creates new instances; that's the nature of remoting: data is passed between client and server, and at the side where it arrives it's stored in a new instance.

If you don't fetch related entities, you can move data from one instance to the other like this:

    myOldCustomerInstance.Fields = myNewInstance.Fields;

Now myOldCustomerInstance has the same fields object as myNewInstance.

Though I wouldn't build my application like that, as it causes problems when you try to fetch graphs from the service.

Keep in mind that the client works disconnected, i.e. it gets the entities by value. This means that you should use the service as a host for actions which you perform in a batched way:

  • connect to the service, grab data for the client-side process
  • perform the process on the client (can be anything: show a form, do things)
  • collect the data for propagation back to the service
  • connect to the service, send the data to it for persistence.

You can take this very broadly, and the processing on the client can be very thin, but the idea is the same: avoid chatty service usage. Connect, get data, disconnect, do your thing, connect, send data, disconnect, end.
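For example, a batched round trip from the client over a remoting proxy could look roughly like this (the ICustomerService facade, the URL and the customer values are just the made-up names from the sketch above; channel configuration is omitted):

    // 1. connect to the service and grab the data for the client-side process
    ICustomerService service = (ICustomerService)Activator.GetObject(
        typeof(ICustomerService), "tcp://appserver:8080/CustomerService.rem");
    CustomerEntity customer = service.GetCustomer("CHOPS");

    // 2. perform the process on the client (show a form, let the user edit, etc.)
    customer.Phone = "0452-076545";

    // 3. + 4. collect the changes and send them back to the service in one call
    bool saved = service.SaveCustomer(customer);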

The second option leads to a problem that's one of the core issues with SOA: what if you need two services to get your data? For example: get a customer with a filter on orders, or get customers and get orders where you define different criteria for each, and the two resultsets have to be merged.

I'm not sure I follow on this one. Is it the same as my issue above (merging objects?) or are there issues beyond that which I have yet to see?

You want to merge objects, so you want to keep object instances alive for a long time in your client?

I've been reading a LOT in the Architecture forum this weekend, and from what I could digest I am thinking that the best route for me would be to use ADO.NET datasets as data transfer objects, completely hiding the LLBLGen object model from the client. These give me merge capabilities in addition to separating the functions of the BL and the PL completely.

Your thoughts?

You'll then do a lot of extra work that's IMHO unnecessary. But then again, I always try to get clients as stateless as possible, so I don't keep objects around for a long time. The main issue with keeping objects around is that the longer you hold on to them, the more stale their contents will become. This doesn't have to be a problem per se (a list of countries, for example), but for most other data it's not recommended: it can influence decisions made on the client while the real data in the database has actually changed, which means the decisions made on the client are wrong.

Frans Bouma | Lead developer LLBLGen Pro
Otis
LLBLGen Pro Team
Posts: 39750
Joined: 17-Aug-2003
# Posted on: 19-Apr-2005 11:09:53   

Darwin wrote:

ADO.NET datasets would also give me some built-in validation such as foreign key constraints, allow / don't allow null, string length, and a place to hook errors into the model. Does LLBLGen have a way that I can do that? It may have... I only started working with it a week ago!

Foreign key constraint checking in datasets is only valid for the data in the dataset, and IMHO shouldn't be used as a reliable check (i.e. an FK check in a DS can fail while the data IS available in the DB, and it can succeed in the DS while the data is NOT available in the database).

LLBLGen Pro has a fine-grained validation mechanism: per entity or per field. Please see the Validation topic in the documentation (Using the generated code). Adapter doesn't have nullable flags in entity fields; selfservicing does, as selfservicing has the persistence information in the entity fields.

Due to a lot of requests, it will be added to the 1.0.2004.2 release as a last fix later today.

In other words, is there a way that I can tell if a property is required, or is a foreign key (and thus must exist elsewhere), or what its length is if it's a string property? Where would I hook in errors relative to a specific property?

Validation errors should be thrown as exceptions. Databinding-specific error information isn't implemented, as it is too restrictive: it only helps databinding, while exceptions always work.

Field validators can simply return true/false; if validation fails, the value isn't set. You can also opt for throwing an exception, of course.

In 1.0.2004.2, methods are added to intercept field validation and to intercept the results. 1.0.2004.2 is now in beta and will be released later this week.

The length of a string property is available as entity.Fields[index].MaxLength. LLBLGen Pro automatically performs validation for this.
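For example, a client-side pre-check on a string field could use that metadata; a minimal sketch (CustomerFieldIndex.CompanyName, the textbox and the message text are just example names from generated Northwind-style code):

    // Illustrative only: check a value against the field's MaxLength before setting it.
    IEntityField2 field = customer.Fields[(int)CustomerFieldIndex.CompanyName];
    string newValue = companyNameTextBox.Text;

    if(newValue.Length > field.MaxLength)
    {
        MessageBox.Show(String.Format("Company name can be at most {0} characters.", field.MaxLength));
    }
    else
    {
        customer.CompanyName = newValue;   // the generated code validates MaxLength as well
    }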

I hope you don't mind all the questions. I probably shouldn't be contemplating switching to a new mechanism for the DAL on a live project... but the advantages that LLBLGen has to offer are just too great to not give it a go!

I don't mind the questions at all; please ask, as it's better to know all the options beforehand than to regret an uninformed decision later on. :)

Frans Bouma | Lead developer LLBLGen Pro
Rogelio
User
Posts: 221
Joined: 29-Mar-2005
# Posted on: 19-Apr-2005 14:57:26   

Otis wrote:

LLBLGen Pro has a fine-grained validation mechanism: per entity or per field. Please see the Validation topic in the documentation (Using the generated code). Adapter doesn't have nullable flags in entity fields; selfservicing does, as selfservicing has the persistence information in the entity fields.

Due to a lot of requests, it will be added to the 1.0.2004.2 release as a last fix later today.

Frans, what do you mean by "Adapter doesn't have nullable flags in entity fields"? That the current Adapter's entity.Fields(index).IsNull is not working, or that Adapter does not have a flag to indicate whether the field allows null in the database?

Otis
LLBLGen Pro Team
Posts: 39750
Joined: 17-Aug-2003
# Posted on: 19-Apr-2005 15:25:49   

Rogelio wrote:

Otis wrote:

LLBLGen Pro has a fine-grained validation mechanism: per entity or per field. Please see the Validation topic in the documentation (Using the generated code). Adapter doesn't have nullable flags in entity fields; selfservicing does, as selfservicing has the persistence information in the entity fields.

Due to a lot of requests, it will be added to the 1.0.2004.2 release as a last fix later today.

Frans, what do you mean by "Adapter doesn't have nullable flags in entity fields"? That the current Adapter's entity.Fields(index).IsNull is not working, or that Adapter does not have a flag to indicate whether the field allows null in the database?

The latter: the adapter fields currently don't have information about nullability in the database (so there is no info on whether the field in the database accepts nulls or not). I've now added this; it will be available in release candidate 1, which will be released later today.

I saw that it was still a pain point, i.e. people want to write client-side checks, do GUI-oriented things etc., and it could be a real problem in distributed environments where no adapter is available, so you can't ask whether a field is nullable. So in 1.0.2004.2, this information is available through IEntityField(2).IsNullable.

In selfservicing this was already available; there, IsNullable returns IEntityField.SourceColumnIsNullable.
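So once 1.0.2004.2 is out, a client-side 'required field' check could look roughly like this (the Phone field, the ErrorProvider and the textbox are example names for a WinForms client):

    // Assumes the new IsNullable flag described above; illustrative only.
    IEntityField2 phoneField = customer.Fields[(int)CustomerFieldIndex.Phone];
    if(!phoneField.IsNullable && phoneField.CurrentValue == null)
    {
        errorProvider.SetError(phoneTextBox, "Phone is required.");
    }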

Frans Bouma | Lead developer LLBLGen Pro
Darwin
User
Posts: 38
Joined: 12-Apr-2005
# Posted on: 21-Apr-2005 01:12:10   

Remoting effectively creates new instances; that's the nature of remoting: data is passed between client and server, and at the side where it arrives it's stored in a new instance.

So if I call Save() on the PL, and the BL tier returns the saved object (re-fetched), I would need to destroy the original object that I saved and replace it with the new object that was returned? I can't do that from within the object itself, can I? If I can't, won't I need to make two calls to the BL: one for Object.Save() and another for ObjectManager.Get()? Or just move the entire Save() outside of the object? That doesn't seem very self-contained, though.

If I destroy the original instance of the object and replace it with the new instance, won't all of my databinding initialization code have to re-fire? (in WinForms)

When I was using ADO.NET I would merge the returned dataset into the one I had already databound. That's where I'm getting the word "merge" from. I don't know that it is the best word to describe what I'm talking about.

I am thinking that the best route for me would be to use ADO.NET datasets as data transfer objects, completely hiding the LLBLGen object model from the client. These give me merge capabilities in addition to separating the functions of the BL and the PL completely.

You'll then do a lot of extra work that's IMHO unnecessary. But then again, I always try to get clients as stateless as possible, so I don't keep objects around for a long time. The main issue with keeping objects around is that the longer you hold on to them, the more stale their contents will become.

That's why I need a Refresh() as well... I have to hold the object as long as the WinForm for it is open, don't I?

Thanks for all your help, and your patience.

Darwin

Darwin
User
Posts: 38
Joined: 12-Apr-2005
# Posted on: 21-Apr-2005 20:19:17   

Is there something very basic missing in my thinking? Maybe something about the OO approach, or about n-tier architecture? I've been solving this puzzle in the same way for so long that I don't seem to be getting it. I'd appreciate feedback from anyone who might help. What you think of as "very basic", I may think of as "revolutionary".

Thanks, Darwin

Otis
LLBLGen Pro Team
Posts: 39750
Joined: 17-Aug-2003
# Posted on: 22-Apr-2005 10:01:00   

Darwin wrote:

Remoting effectively creates new instances; that's the nature of remoting: data is passed between client and server, and at the side where it arrives it's stored in a new instance.

So if I call Save() on the PL, and the BL tier returns the saved object (re-fetched), I would need to destroy the original object that I saved and replace it with the new object that was returned?

In a remoting scenario, yes. (Btw, don't use selfservicing with remoting.) But it's natural, as the logical order of events will be:

  • collect information
  • call the service, pass the changed object to save
  • if it's needed again, call the service and refetch the data, which will be done in the form of var = (type)service.Get...(id);

I can't do that from within the object itself, can I? If I can't, won't I need to make two calls to the BL: one for Object.Save() and another for ObjectManager.Get()? Or just move the entire Save() outside of the object? That doesn't seem very self-contained, though.

You always have to. The data is fetched into the object on the server side; to get the data across to the client, you always have to call the service.
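As a small sketch of that two-call cycle, continuing with the made-up ICustomerService facade from earlier (BindCustomerToForm stands in for whatever code rebinds the WinForm controls):

    // Call 1: send the changed entity to the server for persistence.
    bool succeeded = service.SaveCustomer(customer);

    if(succeeded)
    {
        // Call 2: the client-side instance is now 'used up'; refetch a fresh copy
        // from the service and rebind the form to the new instance.
        customer = service.GetCustomer(customer.CustomerId);
        BindCustomerToForm(customer);
    }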

If I destroy the original instance of the object and replace it with the new instance, won't all of my databinding initialization code have to re-fire? (in WinForms)

When I was using ADO.NET I would merge the returned dataset into the one I had already databound. That's where I'm getting the word "merge" from. I don't know that it is the best word to describe what I'm talking about.

Merge will also mark the rows as changed if I'm not mistaken, and Merge is IMHO more commonly used to add rows from one DS into another, or not?

I am thinking that the best route for me would be to use ADO.NET datasets as data transfer objects, completely hiding the LLBLGen object model from the client. These give me merge capabilities in addition to separating the functions of the BL and the PL completely.

You'll then do a lot of extra work that's IMHO unnecessary. But then again, I always try to get clients as stateless as possible, so I don't keep objects around for a long time. The main issue with keeping objects around is that the longer you hold on to them, the more stale their contents will become.

That's why I need a Refresh() as well... I have to hold the object as long as the WinForm for it is open, don't I?

Be careful: a remoted client is by definition a holder of stale data. The longer you keep an object around, the more stale the data gets. A remoted client therefore should be designed such that it doesn't 'chat' a lot with lower tiers, as the more 'chatter' is implemented, the less scalable your application will become (imagine 1000 clients hammering the server every 100ms).

So working with data remotely is basically:

  • do work on the client for a given piece of functionality, till it's done
  • when it's done, it's time to save the changes; do that in one batch
  • the functionality is over if the save succeeded. If not, the user makes changes again and retries (the objects are still new/changed). If the action is successful, close the functionality's screens and let the user do something else.

Frans Bouma | Lead developer LLBLGen Pro
Otis
LLBLGen Pro Team
Posts: 39750
Joined: 17-Aug-2003
# Posted on: 22-Apr-2005 10:07:20   

Darwin wrote:

Is there something very basic missing in my thinking? Maybe something about the OO approach, or about n-tier architecture? I've been solving this puzzle in the same way for so long that I don't seem to be getting it. I'd appreciate feedback from anyone who might help. What you think of as "very basic", I may think of as "revolutionary".

You're not missing anything; it's just a common mismatch between expectations and reality. You probably 'expect' to be able to write a normal WinForms client which uses remoting as if it uses a local BL and DAL, but that's not the case: a remoted BL service comes with a network connection as the primary communication channel, and that's a serious bottleneck. It would be great if the network could be seen as a transparent layer and it all just worked as if the BL tier was local, but it would be very unwise to ignore the limitations of using a remoted service.

Frans Bouma | Lead developer LLBLGen Pro