Stream a field

Ian
User
Posts: 511
Joined: 01-Apr-2005
# Posted on: 27-Oct-2005 19:37:29   

Hi,

If I store a big file in a DB column, say 20MB, is there a way I can stream the data into memory instead of loading it all at once?

And also stream it back into the DB...

Cheers, I.

Otis
LLBLGen Pro Team
Posts: 39933
Joined: 17-Aug-2003
# Posted on: 28-Oct-2005 11:14:32   

No, this is currently not supported.
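
For reference: this can be done outside the framework with plain ADO.NET. CommandBehavior.SequentialAccess makes the data reader pull a large column from the wire in chunks instead of buffering the whole value in memory. A minimal sketch, assuming a hypothetical SQL Server table Files with an int Id key and a varbinary Data column:

using System.Data;
using System.Data.SqlClient;
using System.IO;

class BlobStream
{
    // Streams the hypothetical Files.Data column to disk in 8KB chunks.
    static void StreamBlobToFile(string connectionString, int id, string path)
    {
        using(SqlConnection conn = new SqlConnection(connectionString))
        using(SqlCommand cmd = new SqlCommand(
            "SELECT Data FROM Files WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", id);
            conn.Open();
            // SequentialAccess tells the reader not to buffer the row;
            // GetBytes then reads the column chunk by chunk.
            using(SqlDataReader reader =
                cmd.ExecuteReader(CommandBehavior.SequentialAccess))
            using(FileStream output = File.Create(path))
            {
                if(!reader.Read())
                {
                    return;   // no such row
                }
                byte[] buffer = new byte[8192];
                long offset = 0;
                long read;
                while((read = reader.GetBytes(0, offset, buffer, 0,
                    buffer.Length)) > 0)
                {
                    output.Write(buffer, 0, (int)read);
                    offset += read;
                }
            }
        }
    }
}

For the write direction, SQL Server 2005's UPDATE ... SET Data.WRITE(@chunk, @offset, NULL) syntax can patch a varbinary(max) column piecewise, so the file never has to be held in memory in one piece either.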

Frans Bouma | Lead developer LLBLGen Pro
Ian
User
Posts: 511
Joined: 01-Apr-2005
# Posted on: 28-Oct-2005 21:45:29   

I think it would also be useful to be able to stream an EntityCollection into memory like a DataReader does.

I could use this for sending out emails to everyone on an email list.
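
Absent such a reader, plain ADO.NET already gives the row-at-a-time behaviour described here. A minimal sketch, assuming a hypothetical Subscribers table and System.Net.Mail; note that the connection stays open for the whole run:

using System.Data.SqlClient;
using System.Net.Mail;

class MailingRun
{
    // Sends one mail per row while the reader streams the result set:
    // only the current row is in memory at any time.
    static void SendToList(string connectionString)
    {
        SmtpClient smtp = new SmtpClient("localhost");   // assumed local relay
        using(SqlConnection conn = new SqlConnection(connectionString))
        using(SqlCommand cmd = new SqlCommand(
            "SELECT Name, Email FROM Subscribers", conn))
        {
            conn.Open();
            using(SqlDataReader reader = cmd.ExecuteReader())
            {
                while(reader.Read())
                {
                    using(MailMessage msg = new MailMessage(
                        "list@example.com", reader.GetString(1)))
                    {
                        msg.Subject = "Newsletter";
                        msg.Body = "Hello " + reader.GetString(0);
                        smtp.Send(msg);
                    }
                }
            }
        }
    }
}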

Otis
LLBLGen Pro Team
Posts: 39933
Joined: 17-Aug-2003
# Posted on: 30-Oct-2005 13:50:15   

Ian wrote:

I think it would also be useful to be able to stream an EntityCollection into memory like a DataReader does. I could use this for sending out emails to everyone on an email list.

Instead of fetching them all first, you mean? I toyed with the idea of having an EntityReader object; however, apart from browsing through large sets of objects without using paging, I couldn't find another situation in which this would be useful, simply because IMHO it's always better to disconnect from the db a.s.a.p. and not leave connections open.

So in your situation I'd use paging.
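
LLBLGen Pro's collection fetch methods take page number/page size arguments for this; the same effect can be had in raw SQL on SQL Server 2005 with ROW_NUMBER(). A sketch of building such a page query by hand, against the same hypothetical Subscribers table:

using System.Data.SqlClient;

class PagedFetch
{
    // Numbers the ordered rows with ROW_NUMBER() (SQL Server 2005)
    // and selects just the requested window.
    static SqlCommand BuildPageQuery(SqlConnection conn,
        int pageNumber, int pageSize)
    {
        SqlCommand cmd = new SqlCommand(
            "SELECT SubscriberId, Email FROM " +
            "(SELECT SubscriberId, Email, " +
            " ROW_NUMBER() OVER (ORDER BY SubscriberId) AS RowNum " +
            " FROM Subscribers) AS numbered " +
            "WHERE RowNum BETWEEN @first AND @last", conn);
        cmd.Parameters.AddWithValue("@first", (pageNumber - 1) * pageSize + 1);
        cmd.Parameters.AddWithValue("@last", pageNumber * pageSize);
        return cmd;
    }
}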

Frans Bouma | Lead developer LLBLGen Pro
Marcus
User
Posts: 747
Joined: 23-Apr-2004
# Posted on: 30-Oct-2005 14:15:17   

Otis wrote:

Ian wrote:

I think it would also be useful to be able to stream an EntityCollection into memory like a DataReader does. I could use this for sending out emails to everyone on an email list.

Instead of fetching them all first, you mean? I toyed with the idea of having an EntityReader object; however, apart from browsing through large sets of objects without using paging, I couldn't find another situation in which this would be useful, simply because IMHO it's always better to disconnect from the db a.s.a.p. and not leave connections open.

So in your situation I'd use paging.

Isn't paging unsafe here? If you fetch page 1 and start sending emails, and meanwhile a customer on page 1 is deleted, page 2 will now contain the wrong data, as the first expected row has actually moved up onto page 1 (because of the delete)...

Otis
LLBLGen Pro Team
Posts: 39933
Joined: 17-Aug-2003
# Posted on: 30-Oct-2005 15:03:57   

Marcus wrote:

Otis wrote:

Ian wrote:

I think it would also be useful to be able to stream an EntityCollection into memory like a DataReader does. I could use this for sending out emails to everyone on an email list.

Instead of fetching them all first, you mean? I toyed with the idea of having an EntityReader object; however, apart from browsing through large sets of objects without using paging, I couldn't find another situation in which this would be useful, simply because IMHO it's always better to disconnect from the db a.s.a.p. and not leave connections open.

So in your situation I'd use paging.

Isn't paging unsafe here? If you fetch page 1 and start sending emails, and meanwhile a customer on page 1 is deleted, page 2 will now contain the wrong data, as the first expected row has actually moved up onto page 1 (because of the delete)...

Hmm, you're right. Though streaming isn't safe either: if you pull the data in query 1 and another query removes a row at position 1, you will still see that row, because the datareader caches results on the server.

However, you won't miss data as you would with paging, totally agreed. I didn't think of that.

Frans Bouma | Lead developer LLBLGen Pro
Ian
User
Posts: 511
Joined: 01-Apr-2005
# Posted on: 30-Oct-2005 18:07:35   

I think paging could be tightened up to be used in this situation.

If there's an identity field in the table, you could order by it and then save the id of the last record on each page.

Then for subsequent pages, you'd filter such that all rows have a higher id than the last record of the previous page.

So if someone removes an earlier record, the first record of the next page is not going to fall down a page, and if someone adds a new record, it'll get added to the end of the set.
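
This is keyset ("seek") paging. A minimal sketch of the scheme, assuming a hypothetical Subscribers table with a SubscriberId identity column (TOP with a parameter requires SQL Server 2005):

using System.Collections.Generic;
using System.Data.SqlClient;

class KeysetPaging
{
    // Seeks past the last id seen, so deletes on earlier pages can't
    // shift rows between pages and new rows land at the end of the set.
    static List<string> FetchNextPage(SqlConnection conn,
        ref int lastId, int pageSize)
    {
        List<string> emails = new List<string>();
        using(SqlCommand cmd = new SqlCommand(
            "SELECT TOP (@pageSize) SubscriberId, Email " +
            "FROM Subscribers WHERE SubscriberId > @lastId " +
            "ORDER BY SubscriberId", conn))
        {
            cmd.Parameters.AddWithValue("@pageSize", pageSize);
            cmd.Parameters.AddWithValue("@lastId", lastId);
            using(SqlDataReader reader = cmd.ExecuteReader())
            {
                while(reader.Read())
                {
                    lastId = reader.GetInt32(0);   // where this page ended
                    emails.Add(reader.GetString(1));
                }
            }
        }
        return emails;
    }
}

Start with lastId = 0 and keep calling until an empty page comes back; the connection can even be closed and reopened between pages without the window drifting.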