Batching isn't implemented because it interferes with other functionality in the runtime that works on a per-entity basis, such as auditing, authorization and concurrency checks. If batching were used, those features would stop working, which is why we don't implement it. Having more features actually hurts in this case, which is somewhat ironic, but unfortunately reality. We did look at implementing it with private reflection hacks the way NHibernate implements it (the SQL Server batching logic is internal to the SQL client, so only Microsoft can use it directly), but we couldn't integrate that with the rest of the pipeline: we need feedback on whether each individual entity was updated/inserted properly, even when you execute them in a 'batch' at a higher level (e.g. a unit of work or a collection save).
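To make the per-entity feedback point concrete, here's a minimal sketch (hypothetical code, not the runtime's actual implementation, using SQLite purely as a stand-in database): an optimistic-concurrency save has to inspect the rows-affected count of each individual UPDATE to detect a conflict, and that per-statement signal is exactly what gets lost when statements are folded into one batch.

```python
import sqlite3

def save_entity(conn, entity_id, new_name, expected_version):
    """Update one entity with a concurrency predicate; return True on success.

    Hypothetical illustration: the WHERE clause checks the version the caller
    last read, so a concurrent change makes the UPDATE touch zero rows.
    """
    cur = conn.execute(
        "UPDATE customer SET name = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_name, entity_id, expected_version),
    )
    # cur.rowcount is the per-statement feedback a batch would hide:
    # 0 rows affected means another transaction changed the entity meanwhile.
    return cur.rowcount == 1

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, version INTEGER)")
conn.execute("INSERT INTO customer VALUES (1, 'Alice', 1)")

ok = save_entity(conn, 1, "Alicia", expected_version=1)    # matches version 1
stale = save_entity(conn, 1, "Alycia", expected_version=1) # version is now 2
print(ok, stale)  # True False
```

With batching, only an aggregate rows-affected count comes back, so you can no longer tell *which* entity failed its concurrency check, nor run per-entity auditing or authorization around each statement.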
If you're looking for bulk inserts, it's best to look at SQL Server's bulk copy import feature, because that import path bypasses the SQL interpreter, making it much faster than executing individual INSERT statements.
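As a rough sketch of the shape difference (using SQLite's `executemany` as a stand-in, since a runnable SQL Server setup isn't assumed here): a bulk-style call hands all rows over in one operation instead of one parsed statement per row. SQL Server's actual bulk copy path, via the `bcp` utility or `SqlBulkCopy` in .NET, is faster still, because it streams rows to the storage engine without going through INSERT statement processing at all.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")

rows = [(i, i * 1.5) for i in range(1, 10001)]

# Row-by-row: one statement executed per entity (the slow pattern).
# for r in rows:
#     conn.execute("INSERT INTO orders VALUES (?, ?)", r)

# Bulk-style: one call, one prepared statement, all rows in a single pass.
conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 10000
```

Note that `executemany` still executes SQL per row under the hood; it only illustrates the API shape. True bulk copy skips the interpreter entirely, which is where the large speedup in the text comes from.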