You added a link to a PR on a repo in your first post; that was likely meant to be a link to our documentation.
In any case, entities with a relationship to themselves aren't batchable. The main reason is that the batching logic can't know whether it has to update rows during the batch (a row can depend on another row of the same type being inserted in the same save), so it skips these entities and their statements are executed individually.
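Roughly like this (a quick sketch, adapter pattern; CategoryEntity / ParentCategory are hypothetical names, not your actual model):

```csharp
// Because Category references itself (ParentCategoryId -> CategoryId), a child row may
// need the PK of a parent row inserted in the same save, so these inserts are emitted
// one by one instead of being packed into a batch.
using (var adapter = new DataAccessAdapter())
{
    var root = new CategoryEntity { Name = "Root" };
    var child = new CategoryEntity { Name = "Child", ParentCategory = root };

    // Recursive save: root has to be inserted first so child can get its FK value.
    adapter.SaveEntity(child, false, true);
}
```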
If you save a graph, e.g. Customer, Order, OrderLines, it'll batch queries per entity type, as it uses a topological sort per entity type. Older versions would end up with a queue like Customer1, Order1, OrderLines1, OrderLines2, Customer2, Order2, OrderLines3 ... etc. (where every customer has 1 order). The batching algorithm now packs the customers together, then the orders, then the order lines: it first sorts the types based on dependency and then the instances of those types (as any customer comes before any order).
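In code that looks something like this (sketch; hypothetical CustomerEntity/OrderEntity/OrderLineEntity names, adapter pattern assumed):

```csharp
// Build a Customer -> Order -> OrderLines graph and save it in one recursive call.
var customers = new EntityCollection<CustomerEntity>();
for (int i = 0; i < 1000; i++)
{
    var customer = new CustomerEntity { CompanyName = $"Customer {i}" };
    var order = new OrderEntity { OrderDate = DateTime.UtcNow };
    order.OrderLines.Add(new OrderLineEntity { Quantity = 1 });
    customer.Orders.Add(order);
    customers.Add(customer);
}

using (var adapter = new DataAccessAdapter())
{
    // One recursive save of the whole graph: the topological sort puts all customer
    // inserts first, then all order inserts, then all order line inserts, so each
    // type's inserts can be packed into batches.
    adapter.SaveEntityCollection(customers, false, true);
}
```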
Keep batch sizes small; 80-120 is optimal in most cases, but it depends on the number of parameters generated per entity (so if you have very big entities, lower the batch size). If you use high batch sizes, you get a lot of parameters, which is likely slower. But as you're working on Azure, the DB delays are astronomical compared to anything else, so any extra roundtrip is likely already slower than the DB having to parse 2000 parameters.
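For the adapter pattern that's a matter of setting the batch size on the adapter before the save (sketch; I'm assuming a runtime version that supports insert/update batching via the BatchSize property on the adapter):

```csharp
using (var adapter = new DataAccessAdapter())
{
    // ~100 entities per batch; with say 20 parameters per insert that's ~2000 parameters
    // per command. Very wide entities -> use a lower value.
    adapter.BatchSize = 100;
    adapter.SaveEntityCollection(customers, false, true);
}
```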
Inserting a lot of rows shouldn't take 7 minutes tho. Inserting a lot of rows into a DB inside 1 transaction isn't that taxing on LLBLGen Pro itself (benchmarks show that; we're a couple of ms behind EFCore in this run, but that's basically fluctuating DB performance, looking at the individual run numbers), but it can be really slow if the DB's files aren't pre-sized. I.e. inserting a lot of rows will probably make the DB grow the data file of your catalog, which can be slow, and every row is also written to the transaction log of your catalog; as that file isn't big either, growing it will take a while too.
So you might want to check where the bottleneck is with a profiler (e.g. use 1000 entities first and profile that).
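Something like this gives you a quick baseline number before you reach for a real profiler (sketch, same hypothetical entity names as above; a profiler then tells you whether the time goes into query generation or the DB itself):

```csharp
// Build a fresh 1000-customer graph and time the recursive save.
var sample = new EntityCollection<CustomerEntity>();
for (int i = 0; i < 1000; i++)
{
    var customer = new CustomerEntity { CompanyName = $"Sample {i}" };
    var order = new OrderEntity { OrderDate = DateTime.UtcNow };
    order.OrderLines.Add(new OrderLineEntity { Quantity = 1 });
    customer.Orders.Add(order);
    sample.Add(customer);
}

var sw = System.Diagnostics.Stopwatch.StartNew();
using (var adapter = new DataAccessAdapter())
{
    adapter.BatchSize = 100;
    adapter.SaveEntityCollection(sample, false, true);
}
sw.Stop();
Console.WriteLine($"Saving 1000 customers (with orders/lines) took {sw.ElapsedMilliseconds} ms");
```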