Err... you wanted to kill our Oracle testbox?
FOR personId IN 1..20000000 LOOP
  INSERT INTO PERSON (PERSON_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER)
  VALUES (personId, 'FIRST_NAME ' || personId, 'LAST_NAME ' || personId,
          'EMAIL ' || personId, 'PHONE_NUMBER ' || personId);
END LOOP;
I definitely don't think 20 million(!) rows are necessary to test a fetch loop that could be tested with 1 (one) row, as you're fetching one row at a time anyway.
Anyway, you should first clean up your test code:
1) batch-fetch the entities into a collection in batches of 100, using paging:
order the person rows by id ascending, then fetch rows 1-100, 101-200 etc., one page per iteration. This is also more efficient than your current routine.
2) if processing is fast (you didn't include any processing; you just loop, so the code in your repro isn't different from any code already posted in this thread), keep the connection open in the adapter (you don't do that now), so you don't open and close a connection every time. You're currently opening and closing a connection for every entity, which means a lot of open/close/dispose calls in your loop.
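To illustrate both points, here's a language-agnostic sketch in Python (your actual code would be C# against ODP.NET, which you didn't post), using the built-in sqlite3 module as a stand-in for the Oracle connection. Oracle would use ROWNUM or OFFSET/FETCH instead of LIMIT; the table and column names are borrowed from the snippet above. Note that the connection is opened once and the rows are read in ordered pages of 100:

```python
import sqlite3

# Stand-in for the test database: an in-memory SQLite DB with a small
# PERSON table (names taken from the PL/SQL snippet above).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE PERSON (PERSON_ID INTEGER PRIMARY KEY, FIRST_NAME TEXT)")
conn.executemany(
    "INSERT INTO PERSON VALUES (?, ?)",
    [(i, f"FIRST_NAME {i}") for i in range(1, 251)],
)

PAGE_SIZE = 100
last_id = 0
batches = []
# One connection, opened once; fetch pages of 100 ordered by id asc.
while True:
    rows = conn.execute(
        "SELECT PERSON_ID, FIRST_NAME FROM PERSON "
        "WHERE PERSON_ID > ? ORDER BY PERSON_ID ASC LIMIT ?",
        (last_id, PAGE_SIZE),
    ).fetchall()
    if not rows:
        break
    # ... process the whole batch here ...
    batches.append(len(rows))
    last_id = rows[-1][0]  # keyset paging: remember the last fetched id
conn.close()

print(batches)  # → [100, 100, 50]
```

The keyset-paging WHERE clause (filter on the last seen id) avoids re-scanning skipped rows on every page, which matters once the table gets large.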
As I said before in this thread, which you apparently completely ignored: the garbage collector has to get a chance to clean up allocated objects, and your tight loop doesn't give it one; on the contrary, fetching 100,000 entities will likely take a couple of seconds.
And above all: you said your example code posted in this thread wasn't representative of your real code. Now you post a 'repro case' which tries to insert 20 million rows into a test table and contains code that is an exact copy of the code you already posted here. I'm sorry, but if you want us to spend time on this, don't pull this kind of stunt, because I'm not amused by this, to say the least.
We're very willing to help, but there's an end to our patience. I don't need your complete solution in source code. What I want is a representative piece of the code you're actually running. And no, I don't need a repro case which inserts 20 million rows into our Oracle testbox, which runs on a virtual machine.
So I'll say it one last time: if you want us to reproduce what you see, send us a REPRESENTATIVE piece of code, not a tight loop which obviously fails because of the issue I explained above. Nothing is done in the loop; all fetched data is lost, so everything that is allocated has to be cleaned up again. However, that cleanup doesn't happen until the GC gets some breathing room, OR when memory pressure is high. And the memory-pressure trigger often comes too late, because a tight loop can allocate a lot of data in a very short time.
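The "breathing room" problem can be sketched like this (again in Python for illustration; the objects and thresholds are made up for the demo). Automatic collection is disabled to mimic a collector that never gets a pause, the loop allocates cyclic object graphs that reference counting alone can't free, and an explicit collect every N iterations is the pause the tight loop otherwise lacks:

```python
import gc

gc.disable()          # simulate a collector that never gets to run
peak_pending = 0
for i in range(5_000):
    node = {}
    node["self"] = node       # a cycle: only the cyclic GC can free this
    # ... no processing: the "fetched" object is dropped immediately ...
    if i % 1_000 == 999:
        freed = gc.collect()  # explicit pause: reclaim the backlog
        peak_pending = max(peak_pending, freed)
gc.enable()

print(peak_pending > 0)  # → True: a large backlog had piled up each time
```

Each collect finds on the order of a thousand unreachable objects, which is exactly the kind of backlog a tight fetch loop builds up when the GC never gets to run between allocations.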
When you monitor memory consumption with performance counters, be sure to check whether the memory is really allocated by the CLR or is allocated elsewhere. Remember that you're opening/closing a connection for every entity, which happens about 20 million times in your loop. As said above, ODP.NET is built on top of Oracle's native client interface, and even though we call Dispose() on all of ODP.NET's tiny little objects, it's still up to Oracle to actually release the memory. We call Dispose() on parameters, command objects and connections, i.e. all the objects involved.
(edit)
According to your stack trace, the out-of-memory exception occurs inside the Oracle client interface during the fetching of the data, not inside the CLR. It wouldn't surprise me if the Dispose() calls on the ODP.NET objects have little effect because the native client layer hasn't been able to release much of the memory it allocated.