Recommendations for LLBLGen Pro
Joined: 05-Oct-2004
Hi there at the forum,
I need to convince my new employer to use LLBLGen Pro as the base for new developments and as the best way to migrate existing applications. The current situation:
- SQL Server backend
- Access-based frontend with tons of stored procedures
- Users have no PCs but network stations based on Citrix (terminal server clients)
The new architecture should be .NET based on remoting. The current ideas for future development are based on the following:
- binary remoting
- MS DataSets as data objects exchanged between the layers
- different facades for the business layer and the data access
I feel unhappy about this and want to argue against this direction, especially concerning the use of MS DataSets.
Any ideas will be greatly appreciated.
Greetings
eugene wrote:
I need to convince my new employer to use LLBLGen Pro as the base for new developments and as the best way to migrate existing applications. The current situation:
- SQL Server backend
- Access-based frontend with tons of stored procedures
- Users have no PCs but network stations based on Citrix (terminal server clients)
The new architecture should be .NET based on remoting. The current ideas for future development are based on the following:
- binary remoting
- MS DataSets as data objects exchanged between the layers
- different facades for the business layer and the data access
I feel unhappy about this and want to argue against this direction, especially concerning the use of MS DataSets.
That scenario can be implemented perfectly using Adapter: you send the entity objects over the wire to the client, which fills the entities or uses the data further, then sends the entities back to the server, which persists them using the DataAccessAdapter class. So you don't have any persistence logic on the client, you work disconnected, and you still have the object-oriented approach. With the prefetch path functionality built into the generated code, you can prefetch a complete graph on the server and send it through remoting to the client.
Hi Otis,
thank you for your remarks. I need further information with regard to the following issues:
- What are the advantages of LLBLGen Pro objects compared to MS DataSets? (I know they are better; I just need specific arguments.)
- Here they have had a number of bad experiences with different applications under Citrix and Terminal Server. Are you aware of any problems that might occur with LLBLGen Pro based clients (WinForms) under Citrix or Terminal Server?
Greetings
eugene wrote:
thank you for your remarks. I need further information with regard to the following issues: - What are the advantages of LLBLGen Pro objects compared to MS DataSets? (I know they are better; I just need specific arguments.)
DataSets have a big disadvantage in that they are always a collection of rows. This means that if you want to work on a Customer object containing that customer's Order objects, you can't do that with a DataSet: you have to instantiate a whole DataSet, inside that a DataTable with a single row for the customer, plus a separate table with the orders, and then create a relation object to link them together.
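To make the ceremony concrete, here is a minimal sketch of what "a whole DataSet, a DataTable, a row, a separate table and a relation object" looks like in code. The table and column names are illustrative, not from a real schema:

```csharp
// Sketch of the DataSet ceremony described above: a whole DataSet, two
// DataTables, and a DataRelation just to model one customer with its orders.
using System;
using System.Data;

class DataSetCeremony
{
    public static DataSet BuildCustomerWithOrders()
    {
        DataSet ds = new DataSet("CustomerWithOrders");

        DataTable customers = ds.Tables.Add("Customer");
        customers.Columns.Add("CustomerID", typeof(string));
        customers.Columns.Add("CompanyName", typeof(string));

        DataTable orders = ds.Tables.Add("Order");
        orders.Columns.Add("OrderID", typeof(int));
        orders.Columns.Add("CustomerID", typeof(string));

        // The relation object needed just to link parent and child rows.
        ds.Relations.Add("CustomerOrders",
            customers.Columns["CustomerID"], orders.Columns["CustomerID"]);

        customers.Rows.Add("ALFKI", "Alfreds Futterkiste");
        orders.Rows.Add(10643, "ALFKI");
        orders.Rows.Add(10692, "ALFKI");
        return ds;
    }

    static void Main()
    {
        DataSet ds = BuildCustomerWithOrders();
        DataRow customer = ds.Tables["Customer"].Rows[0];

        // Navigation always goes rows-and-relations; there is no customer.Orders.
        Console.WriteLine(customer.GetChildRows("CustomerOrders").Length); // prints 2
    }
}
```

Compare that with an entity graph where `customer.Orders` is simply a typed collection hanging off the customer object.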
That's not the only thing: with an O/R mapper/generator like LLBLGen Pro you get a little gem called the dynamic query engine. I find that the biggest advantage of an O/R mapper/generator over DataSets with procedures: it can generate SQL on the fly. This means the O/R mapper/generator constructs a system for you which completely abstracts away the database and the logic to work with data: you never write SQL anywhere; you formulate the filters in C# or VB.NET right where you need them. In the example of our beloved customer above, you can for example fetch that customer, its orders and its order detail rows in one statement, or load them on demand (lazy loading, which is implemented in SelfServicing), and the code is already there. With DataSets you don't have that. Sure, you can design typed DataSets with related tables in the VS.NET IDE and generate stored procedures, but that will not offer you simple things like 'get me that customer and its last order'. You then have to write that query as a stored procedure, while with an O/R mapper/generator it's there, ready to rock.
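A sketch of the "one statement" fetch using the Adapter paradigm. This assumes Northwind-style generated code; the entity names and prefetch-path properties below are illustrative assumptions, not verbatim generated output:

```csharp
// Fetch a customer plus its orders plus their order details in one call;
// the SQL is generated on the fly by the dynamic query engine.
// Assumes Northwind-style LLBLGen Pro generated code (Adapter paradigm).
CustomerEntity customer = new CustomerEntity("ALFKI");

// Describe the graph to fetch: Customer -> Orders -> OrderDetails.
IPrefetchPath2 path = new PrefetchPath2((int)EntityType.CustomerEntity);
path.Add(CustomerEntity.PrefetchPathOrders)
    .SubPath.Add(OrderEntity.PrefetchPathOrderDetails);

using(DataAccessAdapter adapter = new DataAccessAdapter())
{
    // One call fetches the complete graph; no hand-written SQL anywhere.
    adapter.FetchEntity(customer, path);
}

// The graph can now be sent over remoting and navigated on the client.
foreach(OrderEntity order in customer.Orders)
{
    Console.WriteLine(order.OrderId + ": " + order.OrderDetails.Count + " lines");
}
```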
- Here they have had a number of bad experiences with different applications under Citrix and Terminal Server. Are you aware of any problems that might occur with LLBLGen Pro based clients (WinForms) under Citrix or Terminal Server? Greetings
I don't foresee problems, as the problems that can occur with Citrix/Terminal Server are related to WinForms, not to the underlying layers. So if WinForms works, it will work no matter what type of data you put into the forms, be it DataSets or objects.
As far as I understand, using Access as a front end (building a form that represents a table and then navigating through the rows), the terminal server had huge problems allocating sufficient memory to each session/user: Access would simply load whole tables for each user running an application. Of course it is possible to produce such problems with LLBLGen Pro classes, but it is nearly as easy to avoid them. These are the kinds of problems they had.
As far as the DataSet is concerned, one more disadvantage is the true strong-typedness of LLBLGen Pro classes. Inheritance is possible without much hassle (new attributes or overriding existing ones); this is not really possible with DataSets.
Greetings
eugene wrote:
As far as I understand, using Access as a front end (building a form that represents a table and then navigating through the rows), the terminal server had huge problems allocating sufficient memory to each session/user: Access would simply load whole tables for each user running an application. Of course it is possible to produce such problems with LLBLGen Pro classes, but it is nearly as easy to avoid them. These are the kinds of problems they had.
Oh, memory problems; of course. I was more thinking of controls rendering wrongly or firing the wrong events; some applications suffer from that on Citrix. As you opt for remoting, you won't use SelfServicing, so memory problems are not likely to occur: all the data you need has to be fetched from the server using prefetch paths anyway, so you're more focused on what you actually fetch.
As far as the DataSet is concerned, one more disadvantage is the true strong-typedness of LLBLGen Pro classes. Inheritance is possible without much hassle (new attributes or overriding existing ones); this is not really possible with DataSets. Greetings
Indeed, I didn't think of that. Related to this: low-level business logic in the form of entity validators and field validators plugged into the entity classes is easy, but can't be done in DataSets.
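As an illustration of the idea of a pluggable field validator, here is a minimal sketch. The interface shape is an assumption for illustration only; the actual LLBLGen Pro validator interfaces differ per runtime version:

```csharp
// Illustrative sketch of a field validator that could be plugged into an
// entity class. The interface below is hypothetical, not the LLBLGen Pro API.
using System;

public interface IFieldValidator
{
    // Return true if the proposed value is acceptable for the given field.
    bool Validate(int fieldIndex, object value);
}

public class OrderQuantityValidator : IFieldValidator
{
    public bool Validate(int fieldIndex, object value)
    {
        if(value is int)
        {
            // Business rule: quantities must be positive.
            return ((int)value) > 0;
        }
        return true;   // not an int: not this validator's concern
    }
}
```

An entity setter would consult such a validator before accepting a value; with DataSets there is no comparable hook on a DataRow column.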
Hi Otis,
I had my first round of discussion with my superior, who is very content with his DataSets. I would really appreciate any comments as to the problems/disadvantages related to the pure DataSet approach and the solutions LLBLGen Pro provides. I tried to show how it is not really possible to inherit from DataSet-based objects, and this went pretty well. What annoyed him was LLBLGen Pro's inability to parse the resultset of an SP, as this would be of great help here. Also, SPs with more than a single resultset are not uncommon here.
Again, I would really appreciate any information from you. I worked once with the DataSets and I don't want to do it again. I believe we would be doing a much better job here using a Tool like LLBLGen.
Greetings
Joined: 18-Oct-2003
Eugene, my experience says: do not try to convert them. If they are not convinced, you will be taking a risk if you push it. Pro definitely has many advantages; just present them. If they are not convinced, so be it. One way to look at Pro is that it lets us focus more on the business logic and less on the data access logic: it is Pro's job to get the data for us, and our goal is to implement business logic for the client, not the data logic. We are currently not using Pro, but I always visualize the advantages we could have had by using it. Many times I noticed we could have saved a lot of time figuring out why we were not getting data correctly. Cloning and serialization need to be done manually without Pro. Typed collections save a lot of time and help performance; otherwise we end up casting all over the place, which is wasteful. Hope it helps. Thanks.
eugene wrote:
Hi Otis, I had my first round of discussion with my superior, who is very content with his DataSets. I would really appreciate any comments as to the problems/disadvantages related to the pure DataSet approach and the solutions LLBLGen Pro provides. I tried to show how it is not really possible to inherit from DataSet-based objects, and this went pretty well. What annoyed him was LLBLGen Pro's inability to parse the resultset of an SP, as this would be of great help here. Also, SPs with more than a single resultset are not uncommon here.
Parse the resultset of a proc? What's there to parse? The proc's results are stored in a DataTable (if one resultset is returned) or in a DataSet if there are more.
If he has concerns, it's indeed a risk to push this too hard; you might lose goodwill or your boss might get angry with you. If he wants to, he can formulate his concerns and ask here or via email, and I'll be happy to help.
Btw, Eugene, you have email on your account you registered with on this forum.
Hi Otis,
MS DataSets produce a strongly typed class representation of the resultset of an SP. For each column in the resultset, the DataSet produces a member in a DataRow. The other possibility would be to address each and every column in the resultset via an index or its name, which is less attractive. I actually don't know how MS products are capable of getting this data from SQL Server without actually executing an SP (which would of course return the resultset should an SP have one, or delete some rows should the SP be the famous SP_DELETE_ALL_ROWS).
Greetings
eugene wrote:
Hi Otis,
MS DataSets produce a strongly typed class representation of the resultset of an SP. For each column in the resultset, the DataSet produces a member in a DataRow. The other possibility would be to address each and every column in the resultset via an index or its name, which is less attractive. I actually don't know how MS products are capable of getting this data from SQL Server without actually executing an SP (which would of course return the resultset should an SP have one, or delete some rows should the SP be the famous SP_DELETE_ALL_ROWS).
They use a low-level trick which fails on a lot of occasions, which is why I don't use it anymore. It works unless you use temp tables and other nasties in your procs, which can confuse the routine they use. In short: there is no reliable way to determine what the columns of a proc are, unless you parse the proc through and through, which is also not a good technique because procs can be encrypted.
You can do SET FMTONLY ON; EXEC proc; SET FMTONLY OFF. This makes the proc execute without doing anything, and it can effectively be used to grab schema information. This also fails on some occasions, for example in some temp-table scenarios.
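A sketch of that trick from C#: the batch "executes" the proc in metadata-only mode, so no rows are touched but the resultset's column schema comes back. The connection string and the proc name (Northwind's CustOrderHist) are illustrative assumptions:

```csharp
// Grab the column schema of a stored procedure's resultset via SET FMTONLY,
// without the proc having any side effects. Requires a reachable SQL Server.
using System;
using System.Data;
using System.Data.SqlClient;

class FmtOnlySketch
{
    static void Main()
    {
        using(SqlConnection connection = new SqlConnection(
            "Server=.;Database=Northwind;Integrated Security=SSPI;"))
        {
            connection.Open();
            SqlCommand command = new SqlCommand(
                "SET FMTONLY ON; EXEC dbo.CustOrderHist @CustomerID=NULL; SET FMTONLY OFF;",
                connection);

            using(SqlDataReader reader = command.ExecuteReader())
            {
                // GetSchemaTable describes the columns the proc would return.
                DataTable schema = reader.GetSchemaTable();
                foreach(DataRow column in schema.Rows)
                {
                    Console.WriteLine("{0} ({1})", column["ColumnName"], column["DataType"]);
                }
            }
        }
    }
}
```

As noted above, this breaks down when the proc builds its resultset from temp tables, because FMTONLY never actually creates them.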
They use a low-level trick which fails in a lot of occasions and which is why I don't use it anymore.
What is this low level trick? Is this it?
You can do SET FMTONLY ON; exec proc ;SET FMTONLY OFF. This will make the proc be executed but not do anything and can be effectively used to grab schema information.
I noticed that you used that in the past - The SQL Server dataAdapters also uses this method - but it also has problems with temptables - Do you have a new method now?
No, it's an OleDb trick I used in the older SqlServer drivers as well; it doesn't always work.
// For all stored procedures found, retrieve their resultset.
base.SubTaskProgressInitHandler(_currentSchema.StoredProcedures.Count);
IDBStoredProcedure currentStoredProcedure = null;
ArrayList procsToRemoveFromSchema = new ArrayList();
for(int i = 0; i < _currentSchema.StoredProcedures.Count; i++)
{
	try
	{
		currentStoredProcedure = (IDBStoredProcedure)_currentSchema.StoredProcedures.GetByIndex(i);
		base.SubTaskProgressTaskStartHandler("Determining resultset column definitions of stored procedure:" + Environment.NewLine + currentStoredProcedure.StoredProcedureName);

		// Create the command object.
		OleDbCommand command = new OleDbCommand();
		command.CommandText = "[" + currentStoredProcedure.ContainingSchema.SchemaOwner + "].[" + currentStoredProcedure.StoredProcedureName + "]";
		command.CommandType = CommandType.StoredProcedure;
		command.Connection = openOleDbConnection;

		// Create the parameters. These will be filled with NULL.
		CreateEmptyParameters(ref currentStoredProcedure, ref command);
		OleDbDataAdapter adapter = new OleDbDataAdapter(command);
		DataSet resultset = new DataSet("ResultsetSchema");
		DataTable[] schemaDataTables = adapter.FillSchema(resultset, SchemaType.Source);

		// If no schema was returned, the stored procedure didn't contain a select statement.
		if(schemaDataTables.Length <= 0)
		{
			// No schema.
			base.SubTaskProgressTaskCompletedHandler();
			continue;
		}

		// A schema was returned. Analyze it and construct the resultset columns from that schema.
		// OleDb will only return 1 schema, no matter how many select statements are executed
		// in the stored procedure, so just process the first one.
		DataTable schemaDataTable = schemaDataTables[0];
		SortedList resultsetColumnsFound = new SortedList(schemaDataTable.Columns.Count);
		for(int j = 0; j < schemaDataTable.Columns.Count; j++)
		{
			DataColumn currentDataColumn = (DataColumn)schemaDataTable.Columns[j];
			IDBResultsetColumn newColumn = new DBResultsetColumn();
			newColumn.ColumnName = currentDataColumn.ColumnName;
			newColumn.ColumnNetType = currentDataColumn.DataType;
			newColumn.MaxLength = currentDataColumn.MaxLength;
			newColumn.OrdinalPosition = currentDataColumn.Ordinal;
			newColumn.ReturnedByStoredProcedure = currentStoredProcedure;
			resultsetColumnsFound.Add(currentDataColumn.Ordinal, newColumn);
		}
		currentStoredProcedure.ResultsetColumns = resultsetColumnsFound;

		// Done.
		base.SubTaskProgressTaskCompletedHandler();
	}
	catch(OleDbException ex)
	{
		StringBuilder exceptionMessage = new StringBuilder();
		// (rest of the catch block elided in the original post)
	}
}
They do some additional parsing, but as said, that's unreliable, as procs can be encrypted.
Hi there,
in the documentation it is stated that it is not possible to set the value of a property that maps onto an IDENTITY column. Yet the example code provided has a setter for such properties, like OrderID in the Order table of the Northwind DB. It is also possible to set this value on an unsaved entity. Of course, after saving the entity, the value of the property is the one actually generated by the DB. Am I misunderstanding something?
Another question! One thing that MS DataSets provide is their ability to check data in memory against DB constraints like indexes and FKs. As far as I know, code generated by LLBLGen checks, upon adding a new element to a collection, whether this element already exists (which I didn't get to work in a demonstration today). Does the generated code include any mechanisms for such tests? I am well aware that such tests are no guarantee that an insert or an update (where the referenced entity is deleted) would succeed in the end. But does LLBLGen-generated code provide any of these mechanisms?
Greetings
eugene wrote:
in the documentation it is stated that it is not possible to set the value of a property that maps onto an IDENTITY column. Yet the example code provided has a setter for such properties, like OrderID in the Order table of the Northwind DB. It is also possible to set this value on an unsaved entity. Of course, after saving the entity, the value of the property is the one actually generated by the DB. Am I misunderstanding something?
Well, it is possible to set the value (entity.Fields[index].ForcedCurrentValueWrite(newValue);), but that's not recommended, nor very useful. The code generator doesn't have logic to remove the setter from the property; read-only-ness is not provided by a hardcoded property but by a setting in the field definition. It's not that important really: if you choose to use identity columns, they can't be set, and if you want to set them, don't use identity columns.
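A sketch of that escape hatch, for completeness. The entity class and field-index enum are Northwind-style assumptions from generated code; only the ForcedCurrentValueWrite call itself comes from the post above:

```csharp
// Bypassing the read-only setting on a field mapped to an IDENTITY column.
// Not recommended: after a save the DB-generated identity value wins anyway.
OrderEntity order = new OrderEntity();

// order.OrderId = 10248;   // the normal setter respects the read-only setting

// Force the value in, circumventing the read-only setting on the field.
order.Fields[(int)OrderFieldIndex.OrderId].ForcedCurrentValueWrite(10248);
```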
Another question! One thing that MS DataSets provide is their ability to check data in memory against DB constraints like indexes and FKs. As far as I know, code generated by LLBLGen checks, upon adding a new element to a collection, whether this element already exists (which I didn't get to work in a demonstration today). Does the generated code include any mechanisms for such tests? I am well aware that such tests are no guarantee that an insert or an update (where the referenced entity is deleted) would succeed in the end. But does LLBLGen-generated code provide any of these mechanisms?
No, they're not built in, as they would give you the false sense of 'it's checked and approved' while it isn't. In-memory checks are only reliable if they are done on all the data. Most of the time that's not the case, so checking a constraint on a subset of the data is pretty useless. You already gave an example of why it isn't that useful.
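The subset problem can be demonstrated with a plain DataSet constraint. In this sketch (table and column names are illustrative) the in-memory unique constraint only sees the rows that were loaded, so a key that duplicates an unloaded database row still passes:

```csharp
// Why an in-memory constraint check gives a false sense of security:
// it can only validate against the subset of rows that happens to be loaded.
using System;
using System.Data;

class SubsetCheckSketch
{
    // Returns true if adding the key succeeds against the in-memory subset.
    public static bool AddKeyToSubset(int newKey)
    {
        DataTable customers = new DataTable("Customer");
        DataColumn id = customers.Columns.Add("CustomerID", typeof(int));
        customers.Constraints.Add(new UniqueConstraint(id));

        // Pretend the database holds keys 1..100, but we only loaded key 1.
        customers.Rows.Add(1);

        try
        {
            customers.Rows.Add(newKey);
            return true;   // the in-memory check passed...
        }
        catch(ConstraintException)
        {
            return false;  // ...only duplicates of loaded rows are caught
        }
    }

    static void Main()
    {
        Console.WriteLine(AddKeyToSubset(1));   // False: duplicate of a loaded row
        Console.WriteLine(AddKeyToSubset(42));  // True: yet the DB may still reject it
    }
}
```

Key 42 passes the in-memory check but would violate the real unique index on the server, which is exactly the false approval described above.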