Fetch a PrefetchPath in DataSet

Max
User
Posts: 221
Joined: 14-Jul-2006
# Posted on: 26-Sep-2006 15:38:46   

I need to load a PrefetchPath into a DataSet. The idea is to have a DataTable for each entity. I need that because our reporting components don't like the EntityCollection/subcollection structure disappointed ; they only work with DataTable/DataSet.

If I remember correctly, when I fetch an entity/entity collection, LLBLGen loads the data for each branch of the prefetch path through a DataReader, and afterwards transforms that data into entity instances. Maybe it's possible to load the data of each prefetch path node directly into a DataTable, and put all the DataTables in a DataSet. So I could have a FetchEntity/FetchEntityCollection that fills a DataSet instead of an Entity/EntityCollection. Obviously this would be read-only data, like a dynamic list.

I'm asking because it would be nice to use the same load mechanism (PrefetchPath) for the reporting engine of our application as well. And using a PrefetchPath is simpler than defining a dynamic list for each table I need in my report.

Thanks, Max

Jessynoo
Support Team
Posts: 296
Joined: 19-Aug-2004
# Posted on: 26-Sep-2006 17:07:34   

Hi,

I don't think it will be simple, because all the operations run by the data access adapter in the process of prefetch path fetching involve manipulating entities and collections.

I can see two strategies: either you let the adapter perform the fetch and transform the resulting entity tree into your DataSet, or you try to intercept the various queries and build your DataSet progressively.

In both cases I would go for entity projectors; DataProjectorToDataTable should do the trick. You can even define TypedLists quickly to avoid having to build your DataTables by hand (TypedList inherits from DataTable).
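The first strategy (fetch, then project) could look roughly like this sketch. Customer is a hypothetical entity, and the projection helper names (CreateProjection, ConvertToProjectors) are written from memory, so verify them against the SDK reference for your version:

```csharp
// Sketch only: project an already-fetched EntityCollection into a DataTable.
IEntityFactory2 factory = new CustomerEntityFactory();  // hypothetical entity
EntityCollection customers = new EntityCollection(factory);
using (DataAccessAdapter adapter = new DataAccessAdapter())
{
    adapter.FetchEntityCollection(customers, null);
}

// Target table and the projector that fills it.
DataTable table = new DataTable("Customer");
DataProjectorToDataTable projector = new DataProjectorToDataTable(table);

// One property projector per entity field; helper name from memory.
List<IEntityPropertyProjector> propertyProjectors =
    EntityFields2.ConvertToProjectors(factory.CreateFields());

// Run the projection over the collection's default view.
customers.DefaultView.CreateProjection(propertyProjectors, projector);
// 'table' now holds one read-only row per fetched customer.
```

Each prefetch path node's collection could be projected this way into its own table of a shared DataSet.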

Now, if you want to build the DataSet during the prefetch process, you may override the MergeNormal and MergeManyToMany virtual methods in a dedicated adapter to intercept all the inner collections and project them.

Hope that helps

Max
User
Posts: 221
Joined: 14-Jul-2006
# Posted on: 26-Sep-2006 18:08:45   

Jessynoo wrote:

Hi,

I don't think it will be simple, because all the operations run by the data access adapter in the process of prefetch path fetching involve manipulating entities and collections.

I was thinking about a modification so that the data access adapter simply loads every query into a DataTable, without working on EntityCollection/Entity objects at all.

Jessynoo wrote:

I can see two strategies: either you let the adapter perform the fetch and transform the resulting entity tree into your DataSet, or you try to intercept the various queries and build your DataSet progressively.

I've already written a simple procedure to recursively convert an EntityCollection plus its subcollections into a DataSet, but the problem is that it's inefficient. Also, I can't load 100,000 entities plus subentities... I don't have a terabyte of RAM simple_smile 100,000 rows is a realistic number in our reporting application, and that's easily manageable with a DataTable.
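For reference, a hedged sketch of such a converter. The member names used (LLBLGenProEntityName, GetMemberEntityCollections) are from memory; if they aren't publicly available in your LLBLGen version, a per-entity-type switch in generated code works as well:

```csharp
// Walks an already-fetched entity graph and copies field values into one
// DataTable per entity type inside a shared DataSet.
static void AddEntityToDataSet(IEntity2 entity, DataSet ds)
{
    DataTable table = ds.Tables[entity.LLBLGenProEntityName];
    if (table == null)
    {
        // First entity of this type: create the table and its columns.
        table = new DataTable(entity.LLBLGenProEntityName);
        for (int i = 0; i < entity.Fields.Count; i++)
        {
            table.Columns.Add(entity.Fields[i].Name, entity.Fields[i].DataType);
        }
        ds.Tables.Add(table);
    }

    DataRow row = table.NewRow();
    for (int i = 0; i < entity.Fields.Count; i++)
    {
        row[entity.Fields[i].Name] = entity.Fields[i].CurrentValue ?? DBNull.Value;
    }
    table.Rows.Add(row);

    // Recurse into the fetched subcollections. This is what keeps the whole
    // entity graph in memory and makes the approach expensive at scale.
    foreach (IEntityCollection2 childCollection in entity.GetMemberEntityCollections())
    {
        foreach (IEntity2 child in childCollection)
        {
            AddEntityToDataSet(child, ds);
        }
    }
}
```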

Jessynoo wrote:

In both cases I would go for entity projectors; DataProjectorToDataTable should do the trick. You can even define TypedLists quickly to avoid having to build your DataTables by hand (TypedList inherits from DataTable).

I was looking for a flexible way of using prefetch paths to load a DataSet. I can't load the EntityCollection because of the number of entities involved.

Jessynoo wrote:

Now, if you want to build the DataSet during the prefetch process, you may override the MergeNormal and MergeManyToMany virtual methods in a dedicated adapter to intercept all the inner collections and project them.

Hope that helps

This is interesting... simple_smile

Thanks, Max

Otis
LLBLGen Pro Team
Posts: 39927
Joined: 17-Aug-2003
# Posted on: 26-Sep-2006 18:18:30   

If you think 100,000+ rows in a DataTable will be manageable, I think you're mistaken. Is it necessary to pull that much data out of the DB before starting to process it? Can't you run aggregates, group-bys and expressions on the data inside the DB using a dynamic list first, and use that data as the starting point for your reports?
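A minimal sketch of that idea: aggregate inside the database with a dynamic list instead of pulling the raw rows. Order is a hypothetical entity here; check the ResultsetFields and FetchTypedList overloads against the reference manual:

```csharp
// Count orders per customer in the DB; only the aggregated rows come back.
ResultsetFields fields = new ResultsetFields(2);
fields.DefineField(OrderFields.CustomerId, 0);
fields.DefineField(OrderFields.OrderId, 1, "NumOrders");
fields[1].AggregateFunctionToApply = AggregateFunction.Count;

GroupByCollection groupBy = new GroupByCollection();
groupBy.Add(fields[0]);

DataTable result = new DataTable("OrdersPerCustomer");
using (DataAccessAdapter adapter = new DataAccessAdapter())
{
    // No filter, no limit, no sort; duplicates allowed; group by customer.
    adapter.FetchTypedList(fields, result, null, 0, null, true, groupBy);
}
// 'result' holds one row per customer, ready for the report.
```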

Also, you could look into fetching each prefetch path node into a separate datatable, building the filters for each datatable fetch yourself, using the knowledge of the filter of the parent. It's a bit of work, but it might work.
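A rough sketch of that per-node approach, reusing the parent's filter in a subquery for the child node, which is the same trick a prefetch path uses internally. Customer and Order are hypothetical entities; the predicate and FetchTypedList overloads should be checked against the reference manual:

```csharp
// Fetch each "prefetch node" as a flat DataTable inside one DataSet.
DataSet ds = new DataSet("Report");
using (DataAccessAdapter adapter = new DataAccessAdapter())
{
    // Parent node: the filter is whatever the report needs.
    RelationPredicateBucket customerFilter =
        new RelationPredicateBucket(CustomerFields.Country == "Italy");
    DataTable customerTable = new DataTable("Customer");
    adapter.FetchTypedList(new CustomerEntityFactory().CreateFields(),
        customerTable, customerFilter);
    ds.Tables.Add(customerTable);

    // Child node: orders of the customers matched above. Rather than copying
    // PK values out of the parent table, reuse the parent's filter in an
    // IN (subquery) predicate.
    RelationPredicateBucket orderFilter = new RelationPredicateBucket(
        new FieldCompareSetPredicate(
            OrderFields.CustomerId, null,
            CustomerFields.CustomerId, null,
            SetOperator.In, customerFilter.PredicateExpression));
    DataTable orderTable = new DataTable("Order");
    adapter.FetchTypedList(new OrderEntityFactory().CreateFields(),
        orderTable, orderFilter);
    ds.Tables.Add(orderTable);
}
```

Deeper nodes repeat the same pattern, each reusing the filter of the node above.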

Frans Bouma | Lead developer LLBLGen Pro
Max
User
Posts: 221
Joined: 14-Jul-2006
# Posted on: 27-Sep-2006 09:20:19   

Otis wrote:

If you think 100,000+ rows in a DataTable will be manageable, I think you're mistaken. Is it necessary to pull that much data out of the DB before starting to process it? Can't you run aggregates, group-bys and expressions on the data inside the DB using a dynamic list first, and use that data as the starting point for your reports?

You're probably right. But I was trying to keep the same data-loading definition system for the different types of reports simple_smile

Otis wrote:

Also, you could look into fetching each prefetch path node into a separate datatable, building the filters for each datatable fetch yourself, using the knowledge of the filter of the parent. It's a bit of work, but it might work.

Maybe I'll do it this way sunglasses But I want to be sure a simpler way to achieve the same result doesn't exist. simple_smile

Thanks, Max