LLBLGen Beta Process

arschr
User
Posts: 894
Joined: 14-Dec-2003
# Posted on: 29-Aug-2007 15:03:20   

Frans, I'm writing this not to criticize, but to try to help improve the process in future releases. Also remember, I have no visibility into the Solutions Design internal release process.

A short while before v2.5 was released, Frans announced his intention to release in the beta forum. My internal reaction was "wow, there are so many features I've not even had time to look at, much less get familiar with yet." But I didn't say anything. Now, a short while after release, there are two or three "features/problems" identified that won't be added or fixed because doing so would break the v2.5 contract. I expect that as time passes a few more may be identified, as real people in their everyday work really start to exercise the code. I'm not saying that this can be eliminated, but it would be good to make it as infrequent as possible.

In the v2.5 beta process, I didn't see any verification that somebody had actually written code using each feature, both new and old. I realize that the process is staffed largely by volunteers.

For future betas, consider a public list of things (such as specific aspects of features) that can be checked off as tested by individual users as the beta proceeds. Adding something like this to the beta process would have at least two benefits: first, on a volunteer basis, people could see areas that need testing and focus on those; second, release could be based, partially, on having users verify that they have used/tested features.
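Purely to illustrate the idea (this is a made-up sketch, nothing LLBLGen-specific; the feature aspects and names are placeholders), something as simple as this would already do:

```python
# Hypothetical sketch of a public beta-testing checklist: each feature aspect
# records which beta testers have exercised it, so untested areas stand out.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    feature: str                          # e.g. "Auditing"
    aspect: str                           # e.g. a specific scenario to exercise
    testers: set[str] = field(default_factory=set)

    @property
    def tested(self) -> bool:
        return bool(self.testers)

checklist = [
    ChecklistItem("Auditing", "audit an entity delete"),
    ChecklistItem("Fast serialization", "collection carrying custom member data"),
]

# A tester checks off an item after actually writing code against it.
checklist[0].testers.add("arschr")

untested = [f"{item.feature}: {item.aspect}" for item in checklist if not item.tested]
print("Still needs testing:", untested)
```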

Otis
LLBLGen Pro Team
Posts: 39797
Joined: 17-Aug-2003
# Posted on: 29-Aug-2007 19:13:24   

arschr wrote:

Frans, I'm writing this not to criticize, but to try to help improve the process in future releases. Also remember, I have no visibility into the Solutions Design internal release process.

No problem, thanks for taking the time to write this. In the four years that LLBLGen Pro has now been on the market (development started in January 2003, first release in September 2003), we have had several beta periods, and they have sometimes been a success (v2.0) and sometimes a big disappointment (v1.0.2005.1, and also somewhat with v2.5).

The sense of disappointment is largely fed by the apparent lack of interest during a beta period from customers who jump onto the new version when it is released and then sometimes run into corner cases of the features. This is frustrating, both for us and for the customer, as the customer can't get something changed and we can't push the change into the current version anymore... However, it's also understandable from the customer's point of view. So we don't blame the customers; it's just a situation you don't want to have. simple_smile

So this last beta period, which I think was the longest in the history of LLBLGen Pro versions (this was the 8th major release in 4 years), was more or less an eye-opener for two things: 1) feature discussions with everyone here aren't useful; they have to be held with a select group of people; 2) beta testing by everyone isn't useful either: you need dedicated beta testers to weed out the design flaws and corner cases which didn't pop up in code reviews and testing. Not because beta tests by a lot of people aren't useful, but because you can't rely on beta testers to test EVERYTHING.

A short while before v2.5 was released, Frans announced his intention to release in the beta forum. My internal reaction was "wow, there are so many features I've not even had time to look at, much less get familiar with yet." But I didn't say anything. Now, a short while after release, there are two or three "features/problems" identified that won't be added or fixed because doing so would break the v2.5 contract. I expect that as time passes a few more may be identified, as real people in their everyday work really start to exercise the code. I'm not saying that this can be eliminated, but it would be good to make it as infrequent as possible.

I don't think they can all be weeded out, simply because the scope is too large: there are too many features one can use in too many different situations to oversee everything and prove it correct (I like to prove my code is correct instead of relying on brute-force unit testing (we do that too, no worries wink ), which can miss situations). So there will always be a user who runs into a case where things aren't possible, and when the situation is explained it's obvious that the case is logical and should be possible. We currently have one of these: the easy serialization of your own data in a collection during fast serialization when the collection itself is empty.

The cases you refer to are the above case and a related one. An issue brought forward which could be solved with an internal API change (but which is also a possible versioning nightmare between the DQE version and the ORMLib version) can be added to that list as well.

That list will always be there; however, it's not the end of the world: many cases (as in all the cases currently known) are corner cases which can be worked around easily. It's UNfortunate, though, that they're only mentioned now, as three weeks earlier we could have changed the code a bit to make them work.

In early versions, like 1.0.2003.2 and 1.0.2004.1, we simply released the update to the API as a new build and added a line to the readme with "Watch out!" simple_smile . We've learned that that's not the way to go (MySQL still does this, for example), as it's a big burden for customers to keep track of build numbers, especially if you have to manage a lot of installed applications built with a given version. This sometimes went wrong, with runtime exceptions as the result, posted here for us to puzzle out what went wrong.

In the v2.5 beta process, I didn't see any verification that somebody had actually written code using each feature, both new and old. I realize that the process is staffed largely by volunteers.

For future betas, consider a public list of things (such as specific aspects of features) that can be checked off as tested by individual users as the beta proceeds. Adding something like this to the beta process would have at least two benefits: first, on a volunteer basis, people could see areas that need testing and focus on those; second, release could be based, partially, on having users verify that they have used/tested features.

When we add a feature, we first write a small doc which describes the feature and do a pre-analysis of the impact of the changes. Then we code the change in the library, designer and/or templates, and write tests for the feature (if applicable).

Unit tests can only test the situations you anticipated. So I often try to prove that the code is correct by re-reading it, checking what's going on, etc. This has resulted overall in a solid piece of software, but there are always thinkable situations where a feature doesn't work as expected. Often it's fixable with a patch, but sometimes the feature simply doesn't cut it in those situations.
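To illustrate the point with a tiny, generic example (made-up Python, nothing from our code base): the test suite only covers the inputs the author anticipated, so a corner case like an empty collection slips straight through:

```python
import unittest

def average(values):
    # Works for the anticipated case: a non-empty list of numbers.
    return sum(values) / len(values)

class AverageTests(unittest.TestCase):
    # The suite covers only the situations the author thought of...
    def test_typical_input(self):
        self.assertEqual(average([2, 4, 6]), 4)

# ...but an unanticipated corner case still passes the suite unnoticed:
# average([]) raises ZeroDivisionError, and no test checks for it.

if __name__ == "__main__":
    unittest.main()
```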

In THOSE situations it would have been very handy indeed if someone had thought of that up front. I like the idea of a feature board where features are examined by the actual users (you! simple_smile ) to see if they would work or not. However, it's not always possible to get the results you want, as the users participating in the beta might not be the ones needed to weed out a design flaw popping up in a given corner case. An example where it went OK is the discussions about the auditing and authorization details. Those have helped a great deal to make them the cool features they are now.

Thanks for the idea; I think we'll definitely try something like that in the fall, when we'll present textual DSLs for entity definitions and mappings. These will be the basis for v3 and will be presented as a testbed for importing/exporting the project file to textual form for v2.5 projects, so people can play with the languages and see if they work or not.

Frans Bouma | Lead developer LLBLGen Pro