Client List and more...
Does SD have a list of the big customers (Fortune 500, etc.) that are using (or have purchased) LLBLGen?
I'm having a hard time getting LLBLGen into my current client organization because of DBA roadblocks put up over the whole stored proc vs. parameterized query debate. I thought that having a list of big clients might help alleviate some management concerns.
Also, I can't find a concrete Microsoft article or MSDN document that explicitly says "Yep, parameterized queries and stored procs are more or less equivalent in terms of speed" for the types of operations (entity persistence) we'd be using LLBLGen for.
Frans, I have spent hours searching the web, and all I find is opinions from lots of people (including you, of course), but nothing solid from Microsoft aside from the SQL Server Books Online passage you previously quoted:
"SQL Server 2000 and SQL Server version 7.0 incorporate a number of changes to statement processing that extend many of the performance benefits of stored procedures to all SQL statements. SQL Server 2000 and SQL Server 7.0 do not save a partially compiled plan for stored procedures when they are created. A stored procedure is compiled at execution time, like any other Transact-SQL statement. SQL Server 2000 and SQL Server 7.0 retain execution plans for all SQL statements in the procedure cache, not just stored procedure execution plans."
Unfortunately, this doesn't explicitly say that other SQL statements may be just as fast as stored procs. And you can imagine that DBAs who aren't interested in seeing "our" point of view will not draw the same interpretation from the information quoted above.
Any chance you can poke someone at Microsoft about this? (Or point me to some documentation that I've overlooked?)
That quote from the BOL describes exactly what happens: every parameterized query is compiled into an execution plan, the same as every proc that is run. In other words: the SQL inside a proc and the SQL sent to the database as a parameterized query are both compiled when the query is run, and the compiled execution plan is then cached.
When the query is run again, i.e. the proc is called again or the parameterized query is sent again, the SQL Server optimizer checks whether a cached execution plan is present. If so, it pulls that plan and runs it again (barring recompile triggers, of course).
So, in other words: it doesn't matter. This is all in the BOL where execution plans are described. If your DBA still denies it, I think it's a lost cause, as there's no better source than the BOL, which is very clear and understandable about what's happening.
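The plan-cache mechanics above can be sketched without a database at all. SQL Server keys its plan cache on the query text, so a parameterized query (constant text, varying bound values) maps to one reusable plan, while SQL with literals embedded produces different text per value. This is a minimal, hypothetical illustration (the table and parameter names are made up; no real server is involved):

```python
# Sketch: why parameterized SQL is plan-cache friendly.
# The plan cache is keyed on the exact query text, so constant text
# means one cached plan reused on every call, like a proc call.

def literal_query(customer_id: int) -> str:
    # Embedding the value: every distinct id yields distinct query text,
    # so each one would compile (and cache) its own execution plan.
    return f"SELECT * FROM Customers WHERE CustomerID = {customer_id}"

def parameterized_query() -> str:
    # Constant text; only the bound parameter value varies at execution,
    # so the one cached plan is reused for every call.
    return "SELECT * FROM Customers WHERE CustomerID = @p0"

literal_texts = {literal_query(i) for i in range(100)}
param_texts = {parameterized_query() for i in range(100)}

print(len(literal_texts))  # 100 distinct texts -> up to 100 compiled plans
print(len(param_texts))    # 1 text -> 1 cached, reused plan
```

The same reasoning is why a stored proc and a parameterized query end up in the same place: both resolve to one cached plan looked up by an unchanging key.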
We have several Fortune 50 companies using LLBLGen Pro on very large systems, including some of the largest banks in the world, with systems of over 1,000 entities, some even over 2,000. If your DBA insists that his/her SQL is faster "because it is stored as compiled procs", I can't help you: your DBA is then just being stubborn.
Dynamic SQL is generated for the purpose and data at hand. This means update queries can be faster: an update of just one field, for example, doesn't require a special proc, nor a catch-all update proc full of COALESCE calls and tests on nullable input parameters. Prefetch paths, where graphs of entities are fetched, can only be done with dynamic SQL, as the SQL is tailored to the parent data so the queries fetching the child entities can be optimized. Doing that with procs would require a tremendous number of different procedures, and it wouldn't be tunable.
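The single-field update case can be sketched in a few lines. This is a hypothetical helper, not LLBLGen Pro's actual API: it shows how a mapper can emit an UPDATE containing only the changed fields, instead of one generic proc that must handle every combination of inputs:

```python
# Sketch (hypothetical helper): build an UPDATE over only the changed
# fields, with parameter placeholders rather than embedded literals.

def build_update(table: str, changed: dict, key_field: str):
    """Return (sql, params) touching only the fields in `changed`."""
    # One SET item per changed column; @p0, @p1, ... are placeholders.
    set_clause = ", ".join(f"{col} = @p{i}" for i, col in enumerate(changed))
    sql = f"UPDATE {table} SET {set_clause} WHERE {key_field} = @pk"
    params = list(changed.values())
    return sql, params

# Only 'Phone' changed, so only 'Phone' appears in the statement.
sql, params = build_update("Customers", {"Phone": "555-0100"}, "CustomerID")
print(sql)  # UPDATE Customers SET Phone = @p0 WHERE CustomerID = @pk
```

A proc-only approach would need either one proc per field combination or a single proc that updates every column with NULL checks, which is exactly the overhead the dynamic statement avoids.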
We've been dealing with this "stored procedures are faster" myth for over 5 years now, and in general, if the DBAs keep insisting it's true, it's not really going to work out. DBAs should know this information already: a DBA who claims that stored procedures are stored in compiled form doesn't know the facts about the database s/he works with. So let's assume the DBA does know this. If s/he still insists it's true, something else is the problem, and often the DBA is simply afraid that using an O/R mapper will cause big problems because the developers have no clue about writing queries, so the queries will be terrible and bring things down.
Therefore, I always advise that the DBA should be the consultant for the developer: the developer writes code which will create queries at runtime, and the DBA will likely check these, spot bottlenecks, and can then advise the developer what to do. Also, the DBA should work WITH the developers to see where things might get problematic, e.g. a lot of inserts on a table that also gets a lot of selects: where are indexes needed?
If the DBA wants to keep that all for him/herself, the developer can't really write software on top of it, as the DBA apparently doesn't want to work with the developer to create the best possible software FOR the organisation BOTH work for.
So, if the DBAs aren't convinced they should work WITH the developers (and that works both ways: the developers have to work with the DBAs), but instead want to build a fortress around their world where everyone who even looks at them is denied access, the project is going to be a disaster from the beginning, whatever data access solution you pick.
We have thousands of companies as customers in over 60 countries, and many of them are rather big (Fortune 500 or Fortune 50), among them a lot of banks and government departments, which are often considered conservative in their data access choices (big iron + procs).
Though I don't think all of that will convince a DBA that stored procs are indeed only compiled at runtime, or suddenly make them willing to co-operate with the developers. Both sides should realize (developers too: DBAs aren't the enemy) that everyone works for the same organisation and works together to get the best result possible for that organisation. If people find their own kingdom inside that organisation more important than the organisation itself... it's a lost cause.