Adapter?
Joined: 22-Mar-2006
What is the particular purpose of the adapter scenario? I understand that it's "better" for services, but why? The documentation only seems to state an opinion (opinion == claim without support).
I found this thread, but I still don't understand. http://www.llblgen.com/TinyForum/Messages.aspx?ThreadID=3787&HighLight=1
What I'm really trying to get at is why there are two. I try to keep things as unified as possible: for example, ALL of my services are WCF now, with endpoints for TCP, IPC, WS, etc. on each of them... and I want that same logic usable at times as a DLL, which I like selfservicing for (for the same library as a service, I write a simple facade, which doubles as my service). But this adapter/selfservicing split seems to break my unification, so that I should be using one for one purpose and the other for something else. Sure, I could use selfservicing for the internals and adapter for the facade, but the LLBLGen interface doesn't generate both at once, and I'm FAR too lazy to generate twice and add twice and repeat that every time a change is made (which is often).
Any help would be appreciated.
Joined: 17-Aug-2003
Adapter is essential in distributed applications. Consider this: the service fetches a CustomerEntity and sends it to the client. The client can now access CustomerEntity.Orders, but that won't trigger lazy loading. The client can't call Save(), because it's not there, nor Delete(). These things aren't there because they don't make sense on the client: the client can't access the DB; the service does.
Adapter is also essential in projects where a group of developers within a team isn't allowed to call persistence logic, for example the GUI developers: they always have to call the BL tier / service. With selfservicing, they have indirect DB access through lazy loading, and more importantly they can take shortcuts with Save() and Delete(). With adapter this isn't possible, as the GUI developers have no reference to the DB-specific project.
Adapter is also essential when you want to access multiple database types in one system. Say you want to load/save in both Oracle and SQL Server. You can with adapter: the persistence mappings are applied by the DB-specific project of your choice, and the entities themselves are free of persistence logic and info.
If you don't need any of that, it doesn't matter much which one you pick, though it's important to know these things when you have to make the choice.
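To make the contrast concrete, here is a minimal, self-contained sketch of the adapter idea. These are toy stand-in classes, not LLBLGen's actual generated code: the point is only that the entity is a plain data container that can safely cross a service boundary, while all persistence lives in a separate adapter object the client never references.

```csharp
using System;
using System.Collections.Generic;

// Adapter style: the entity is a plain, serializable data container.
class CustomerEntity
{
    public string Id = "";
    public string Name = "";
    public bool IsDirty;          // change tracking travels with the entity
}

// All persistence logic lives here; only the service tier references it.
class DataAccessAdapter
{
    private readonly Dictionary<string, string> db = new() { ["CHOPS"] = "Chop-suey" };

    public CustomerEntity FetchEntity(string id) =>
        new CustomerEntity { Id = id, Name = db[id] };

    public void SaveEntity(CustomerEntity e)
    {
        db[e.Id] = e.Name;
        e.IsDirty = false;
    }
}

class Demo
{
    static void Main()
    {
        var adapter = new DataAccessAdapter();          // exists only on the server
        var customer = adapter.FetchEntity("CHOPS");    // server fetches the entity...
        // ...the entity crosses the wire; the client can read and edit it,
        // but customer.Save() does not exist: the client must send it back.
        customer.Name = "Chop-suey Chinese";
        customer.IsDirty = true;
        adapter.SaveEntity(customer);                   // server persists it
        Console.WriteLine(customer.IsDirty);            // prints: False
    }
}
```

In selfservicing, by contrast, Save(), Delete(), and lazy loading are methods and behavior on the entity itself, which is exactly what you don't want reachable from a client.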
Joined: 17-Aug-2003
Correct. It requires that the targets have the same structure, of course. With type converters you can, for example, map a bit field in SQL Server onto a NUMBER(1,0) field in Oracle, so you can use the same DB-generic project to load entities from SQL Server and save them in Oracle, and vice versa.
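A toy sketch of that idea (illustrative classes, not LLBLGen's API): because the entity is persistence-free, any database-specific adapter can load or save it, and a type converter bridges the bit-vs-NUMBER(1,0) representation difference.

```csharp
using System;
using System.Collections.Generic;

// The entity uses a CLR type, independent of any database's storage type.
class AccountEntity
{
    public string Id = "";
    public bool IsActive;
}

interface IAdapter
{
    AccountEntity Fetch(string id);
    void Save(AccountEntity e);
}

// SQL Server side: IsActive maps to a bit column (modeled here as bool).
class SqlServerAdapter : IAdapter
{
    private readonly Dictionary<string, bool> table = new() { ["A1"] = true };
    public AccountEntity Fetch(string id) => new() { Id = id, IsActive = table[id] };
    public void Save(AccountEntity e) => table[e.Id] = e.IsActive;
}

// Oracle side: IsActive maps to NUMBER(1,0); the "type converter" is the
// bool <-> 0/1 translation applied on fetch and save.
class OracleAdapter : IAdapter
{
    public readonly Dictionary<string, int> table = new();
    public AccountEntity Fetch(string id) => new() { Id = id, IsActive = table[id] == 1 };
    public void Save(AccountEntity e) => table[e.Id] = e.IsActive ? 1 : 0;
}

class Demo
{
    static void Main()
    {
        var entity = new SqlServerAdapter().Fetch("A1");  // load from SQL Server
        var oracle = new OracleAdapter();
        oracle.Save(entity);                              // save the same entity to Oracle
        Console.WriteLine(oracle.table["A1"]);            // prints: 1
    }
}
```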
Joined: 22-Mar-2006
Otis wrote:
Correct. It requires that the targets have the same structure, of course. With type converters you can, for example, map a bit field in SQL Server onto a NUMBER(1,0) field in Oracle, so you can use the same DB-generic project to load entities from SQL Server and save them in Oracle, and vice versa.
Hmm, I just tried a few things and then (gasp) read the docs, and I guess you can't really send the strongly typed objects to the client even by using adapter (without making the client change stuff -- they won't)?
If that doesn't work, I would kindly suggest that as a future feature.
Joined: 17-Aug-2003
quantum00 wrote:
Hmm, I just tried a few things and then (gasp) read the docs, and I guess you can't really send the strongly typed objects to the client even by using adapter (without making the client change stuff -- they won't)?
If that doesn't work, I would kindly suggest that as a future feature.
I think you have to elaborate on that a bit, because you can send adapter entities to the client and back without a problem... so could you give an example?
Joined: 22-Mar-2006
Otis wrote:
I think you have to elaborate on that a bit, because you can send adapter entities to the client and back without a problem... so could you give an example?
I mean the classes don't get sent to the client...
Per the docs "The client only has a reference to the database generic project, as it uses the service for database specific activity, namely the persistence logic to work with the actual data. Because both client and service have references to the database generic project, they both can use the same types for the entities, in this case the CustomerEntity. "
then there is the ".NET 2.0 specific: Schema importers" section...
When I design APIs for use over a web service, one of my own company standards is that the client never needs anything but an endpoint and is given all objects there. In LLBLGen, you need to give them a project as well?
I may be mistaken, but that is what I'm seeing... I'm actually rather confused.
Joined: 17-Aug-2003
The thing is that the data over the wire is just XML, so if you want to turn that XML back into live objects of a given type, the type has to be known on the client. If that's not the case, the types get generated by the proxy stubber of wsdl.exe, or whatever WCF comes with.
It's perfectly fine to transfer DTOs to the client, use them there, send them back, and on the server process the DTOs back into entities (as DTOs don't have change tracking etc.). Often, though, services are standalone pieces of functionality which operate at a high level in your application stack. That doesn't really make the client rely on the service's internal representation of entities, as the client just works with the service's high-level API and passes in messages with method data, perhaps, but not entity graphs; nor does the client request entity graphs.
So if you don't want to use entity classes on the client, that's fine. The thing is, though, that you then need to pass DTOs to the client and receive them back. Because no entities are transferred across the wire, you could even use selfservicing inside the service.
In the case where you do want to use entity objects on the client, the service likely operates at a lower level in your application stack, and the client will request and send entity graphs. To ease development of the service, you then want to utilize the change-tracking facilities in the entities, so you don't want to have to convert DTOs to entities and vice versa.
It thus depends on what you want to do on the client side and at what level the services operate in your application. IMHO it's not advisable to use webservices at a low level, e.g. as tiers; it's more advisable to use services as standalone applications which perform a fixed set of functions. These services then accept messages with the data for the command to execute, and no entities go back and forth.
The webservices + schema importers stuff is for people who want to utilize a webservice at a very low level in their application, for example as a 'data service'. This is often slow and resource-intensive, as a lot of conversion to/from XML goes on which would be unnecessary if the service were smarter (and directly contacting the DB over the network is often faster).
The MS code requires IXmlSerializable, or at least did, so that wsdl.exe (also used by VS.NET) could generate the proper stub classes. Without proper schema conversion, all such types are considered DataSets (hardcoded inside wsdl.exe...), so this very complicated route was apparently required.
With WCF it's easier, if you for example use an interface and the same generated code assembly on both sides. Though as you said, if you don't want to have the entity code on the client, you NEED to use DTOs anyway.
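The DTO route described above can be sketched as follows. All type and member names here are hypothetical, chosen for illustration: the entity (with change tracking) stays inside the service, only a flat DTO crosses the wire, and on the way back the server re-applies the DTO to an entity so it regains change tracking before persisting.

```csharp
using System;

// Server-side entity: has change tracking, never leaves the service.
class CustomerEntity
{
    public string Id = "";
    public string Name = "";
    public bool IsDirty;   // change tracking, server-side only
}

// The wire type: no tracking, no persistence, just data.
record CustomerDto(string Id, string Name);

static class CustomerMapper
{
    public static CustomerDto ToDto(CustomerEntity e) => new(e.Id, e.Name);

    // Re-apply the DTO to a (re)fetched entity so the server regains
    // change tracking before persisting.
    public static void ApplyDto(CustomerDto dto, CustomerEntity e)
    {
        if (e.Name != dto.Name) { e.Name = dto.Name; e.IsDirty = true; }
    }
}

class Demo
{
    static void Main()
    {
        var entity = new CustomerEntity { Id = "CHOPS", Name = "Old name" };
        var dto = CustomerMapper.ToDto(entity);            // send to client
        var edited = dto with { Name = "New name" };       // client edits a copy
        CustomerMapper.ApplyDto(edited, entity);           // server maps it back
        Console.WriteLine(entity.IsDirty);                 // prints: True
    }
}
```

The cost of this route is exactly the mapping boilerplate shown in CustomerMapper, which is why a low-level service that ships entity graphs directly tends to prefer sharing the generated entity assembly on both sides instead.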
Joined: 22-Mar-2006
Otis wrote:
The thing is that the data over the wire is just XML, so if you want to turn that XML back into live objects of a given type, the type has to be known on the client. [...]
We never seem to communicate correctly; I'm saying the exact same thing you are... haha...
My goal here was to see if I could achieve what I wanted using the adapter model. I was wondering if the adapter model would allow me to send and receive DTOs (and create them), but I guess that's a completely different thing altogether, which I'll definitely have to look at. Thanks for clearing that up and for sending me down the right road.
Joined: 22-Mar-2006
quantum00 wrote:
Thanks for clearing that up and for sending me down the right road.
By that I mean this: http://www.llblgen.com/tinyforum/Messages.aspx?ThreadID=9040
These two topics are basically the same; I was asking about two sides of the same coin.