Insert / Update query batching

By default the LLBLGen Pro Runtime Framework performs each insert and update using a single insert or update SQL query in a single DbCommand object. In cloud environments this can hurt performance, as network latency becomes a prominent factor. To overcome this, the LLBLGen Pro Runtime Framework additionally supports batching of insert and update SQL statements for Adapter.

Batching is available for both synchronous and asynchronous execution and only requires a single property to enable or disable it. All other persistence logic, e.g. committing a unit of work, stays the same.

Configuring batching and transactions

To configure batching for the work to perform, set the IDataAccessAdapter.BatchSize property to the desired batch size. By default this property is set to 0, which means no batching will occur. A value of 2 or higher means a batch will contain at most that number of SQL statements; if more work is available, multiple batches are performed. When BatchSize is set to 2 or higher, a transaction is always started for a save, even if there's just one statement.
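As a sketch of enabling batching for an asynchronous commit, assuming a generated DataAccessAdapter and a UnitOfWork2 instance named 'uow' that already has work added to it (CommitAsync mirrors the synchronous Commit call):

```csharp
// Sketch: asynchronous commit with batching enabled. 'uow' is assumed to be a
// UnitOfWork2 with entities already added for insert / update.
using(var adapter = new DataAccessAdapter())
{
    // a value of 2 or higher enables batching; 0 (the default) disables it.
    adapter.BatchSize = 50;
    // 'true': auto-commit the transaction after the work completes.
    await uow.CommitAsync(adapter, true);
}
```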


There's no check on whether a particular batch size exceeds the maximum number of parameters for the target database. However, as commands with a high parameter count are much slower than commands with a lower parameter count, it's a good idea to keep batch sizes rather low; below 80 is best in general. Packing 80 or more statements together generally creates a slower overall query than sending the queries separately, as handling the packed command has overhead in most databases as well. The supported databases all accept a large number of parameters, with SQL Server being the most restrictive at 2100.
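To illustrate the arithmetic behind this advice (the numbers below are hypothetical, except SQL Server's 2100 parameter limit mentioned above):

```csharp
// Hypothetical back-of-the-envelope calculation, not an LLBLGen Pro API:
// how many statements would fit under SQL Server's parameter limit?
int maxParameters = 2100;            // SQL Server's per-command parameter limit
int parametersPerStatement = 12;     // e.g. an insert touching 12 fields
int fitting = maxParameters / parametersPerStatement;   // 175 statements would fit
// Despite that headroom, a batch size below 80 generally performs better:
int batchSize = Math.Min(fitting, 80);
```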

Support for batching and its side effects

Batching is in general a good thing, but can have side effects or limitations. These limitations are described below, as well as the databases which support insert and update query batching. The system detects these limitations automatically and, if required, switches to a non-batching query, executed in between batches. This makes the system automatic and easy to use: set the BatchSize property to a value and the system checks whether the queries are batchable; if so, it packs them into batches, and if not, it runs them separately.

Supported databases

The following databases support insert / update batching through LLBLGen Pro:

  • SQL Server
  • PostgreSQL
  • Oracle (ODP.NET and System.Data.OracleClient)
  • Google Cloud Spanner

The other databases either don't support select queries merged with update queries, or don't support multiple queries packed into a single statement.

OnSaveEntity / OnSaveEntityCompleted calls

The LLBLGen Pro Runtime Framework has a rich feature set, and some of it takes place around the individual action of an insert or update. The normal queue persistence calls OnSaveEntity() before the execution, passing in the action query. In the batching scenario, OnSaveEntity() is called before the SQL query with the insert or update statement is added to the batch query, but the query itself isn't executed yet. You might want to augment the SQL query at this point, but you are then responsible for proper parameter naming to avoid clashes.

After the execution of the query, the method OnSaveEntityCompleted is called. For batch queries this is done after the batch query has been executed: OnSaveEntityCompleted is called once for each individual query packed in the executed batch.
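A minimal sketch of hooking these methods in a subclass of the generated DataAccessAdapter; the method signatures shown here are an assumption and should be checked against the reference manual for your runtime version:

```csharp
// Sketch: intercepting individual insert/update statements. Signatures are
// assumptions; verify them against your runtime version's API reference.
public class LoggingDataAccessAdapter : DataAccessAdapter
{
    protected override void OnSaveEntity(IActionQuery saveQuery, IEntity2 entityToSave)
    {
        // Called before the statement is added to the batch; the query hasn't
        // been executed yet. If you augment the query here, make sure any
        // parameters you add can't clash with names in other batched statements.
        base.OnSaveEntity(saveQuery, entityToSave);
    }

    protected override void OnSaveEntityCompleted(IActionQuery saveQuery, IEntity2 entityToSave)
    {
        // For batched saves this is called once per entity, after the whole
        // batch command has been executed.
        base.OnSaveEntityCompleted(saveQuery, entityToSave);
    }
}
```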

Grouping of elements

For updates, the batching algorithm batches all entities to update together in dependency order in one batch, and subsequently creates new batches as long as there are updates to perform. For inserts this is different, as inserts can potentially generate new PK values which have to be synced with FK fields in other entities. This has the effect that the batching algorithm groups all inserts in separate batches, and groups entities of each entity type together in separate insert batches. The batches per entity type are executed in an order based on the dependencies the entity type has with other entity types in the queue. This can lead to batches with a size smaller than the set BatchSize.

Inserts which aren't batchable

  • Entities which have a relationship with themselves, e.g. an Employee entity which references itself over a 'ReportsTo' navigator, won't be batched for insert queries, as the PK-FK sync operation can potentially take place with an entity in the same batch, which then isn't complete when the insert query is run. This means that each of these insert statements is executed as if no batching is enabled, so one insert SQL statement per command object.
  • Insert queries for a Target-per-Entity inheritance scenario rely on parameter-parameter value propagation, and as such require the DbCommand inserting a supertype to be completed before the DbCommand inserting the subtype can proceed. This means that inserts for entities using inheritance aren't batched but are run separately, as if no batching is enabled.

Failures during batch execution

When a query results in a database error, e.g. a unique constraint failure, a PK constraint error or another error, the error terminates the batch and, as a transaction is started when batching is enabled, rolls back the complete transaction. Additionally, the runtime will throw an exception, in general an ORMQueryExecutionException, or in the case of a concurrency error, an ORMConcurrencyException. The entity included in the exception is the last entity added to the batch or the one triggering the batch. This means it's not directly related to the failing entity: which query failed in a batch of potentially hundreds of SQL statements is unknown, as the database reports an error on one of them without indicating which one.

This is a side effect of batching multiple statements together, and it might have the consequence that you have to deal with the actual database error instead of relying on the entity contained in the exception.
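A sketch of guarding a batched commit against these failures, using the exception types named above (again assuming a generated DataAccessAdapter and a filled UnitOfWork2 named 'uow'):

```csharp
// Sketch: handling failures of a batched commit. With BatchSize >= 2 a
// transaction is always started, so a failure rolls back all work performed.
using(var adapter = new DataAccessAdapter())
{
    adapter.BatchSize = 80;
    try
    {
        uow.Commit(adapter, true);
    }
    catch(ORMConcurrencyException)
    {
        // The entity in the exception is the last one added to the batch, not
        // necessarily the one that actually failed.
    }
    catch(ORMQueryExecutionException ex)
    {
        // Inspect the inner database exception for the real cause, e.g. a
        // unique constraint violation.
        Console.WriteLine(ex.InnerException?.Message);
    }
}
```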


// the 'uow' variable is a live UnitOfWork2 instance with various entities added
// to it for insert / update.

using(var adapter = new DataAccessAdapter())
{
    // set the batch size to 80 queries. This will enable insert/update batching.
    adapter.BatchSize = 80;
    // commit the unit of work; 'true' auto-commits the transaction afterwards.
    uow.Commit(adapter, true);
}