code testing
Joined: 15-Oct-2004
I am in the process of putting together some guidelines and standards for code testing in my team and wanted to see what the people on this forum think about the following issues:
1- Should we use VS's unit testing or some other unit-testing framework (NUnit or another)?
2- Should we consider mock objects part of our testing standard? If yes, should we create our mock objects manually or get a 3rd-party library (like TypeMock)?
3- Are there any guidelines or documentation available online as to what to test when writing a unit test? I am worried about over-testing (for lack of a better word).
Joined: 17-Aug-2003
omar wrote:
I am in the process of putting together some guidelines and standards for code testing in my team and wanted to see what the people on this forum think about the following issues:
1- Should we use VS's unit testing or some other unit-testing framework (NUnit or another)?
Well, one thing you should look for is continuity. NUnit is more or less sidelined and outclassed by MbUnit. Of course, it still works, though if you need more sophisticated testing attributes, you're out of luck. Nevertheless, it's mature enough to serve most scenarios.
With VS.NET's testing framework you'll get a v1.0 product. It's not as mature as you'd want, and what's worse: if there's a bug, you won't get a fix for it easily. Another problem is that the attributes used by VS.NET's framework aren't compatible with MbUnit's, for example. Also, it's expensive: with MbUnit/TestDriven.NET (I'd go for that) you can use VS.NET Professional together with the free MbUnit stuff or TestDriven.NET (which costs a little), whereas with VS.NET 2005 you have to buy an expensive edition if you want to give your developers both the testing features AND everything else.
2- Should we consider Mock-objects part of our testing standard? if yes, should we create our mock objects manually or get a 3rd party library (like TypeMock)?
I have never used mock objects, as I don't believe in them, but if you need to, use anything that works for you. Mock objects are mostly used so you can write the tests first and then replace the mocks with real code. My personal opinion is that this promotes software engineering at the keyboard, something which is IMHO a bad thing.
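For readers unfamiliar with the idea being debated: a mock is a stand-in object that records how the code under test interacts with a dependency. The thread is about .NET, but the concept is language-agnostic; here is a minimal hand-rolled-mock sketch in Python, with all names (OrderService, MockMailGateway) invented for illustration.

```python
class OrderService:
    """Code under test: notifies a mail gateway when an order is placed."""
    def __init__(self, mail_gateway):
        self.mail_gateway = mail_gateway

    def place_order(self, customer, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.mail_gateway.send(customer, f"Order for {amount} received")

class MockMailGateway:
    """Hand-rolled mock: records calls instead of sending real mail."""
    def __init__(self):
        self.sent = []

    def send(self, recipient, body):
        self.sent.append((recipient, body))

# The test verifies the interaction, not actual mail delivery.
gateway = MockMailGateway()
service = OrderService(gateway)
service.place_order("alice@example.com", 3)
assert gateway.sent == [("alice@example.com", "Order for 3 received")]
```

Libraries like TypeMock generate such stand-ins for you; writing one by hand, as above, is often enough for simple dependencies.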
3- Are there any guidelines or documentation available online as to what to test when writing a unit test? I am worried about over-testing (for lack of a better word).
I think that depends on who you ask. A TDD advocate will say you have to write your tests first, using mocks, then replace the mocks with real code and you're done. In other words, the tests you write up front check that the functionality works and that the interfaces are designed the way they should be. A kind of dogfooding with virtual food.
A more realistic person (I'm not in the TDD advocate camp) will say there are 2 kinds of tests: 1) interface-correctness tests and 2) functionality tests, or better: use-case tests.
The 1st category is simply a set of tests that exercise the public interfaces of your API (class interfaces). So does method Foo do what it should do when the input is A, B, or C? These tests are artificial, and should guarantee that when you refactor your internal code, running them afterwards will catch any change in interface behavior.
The 2nd category is a set of tests written the way an API user would write code, for example you yourself using the API in a tier on top of it. These tests perform tasks, like 'save updates to an entity graph into the db': you first insert entities into the db, fetch them, change them, check that the changes happened in memory, save the entities, check that the changes were persisted, etc.
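The two categories above can be sketched concretely. The following Python snippet is an invented, self-contained example (a toy in-memory "database" stands in for a real one): the first test checks a single method's contract (category 1), the second walks the insert/fetch/change/save use case the post describes (category 2).

```python
class InMemoryDb:
    """Toy 'database' so the sketch is self-contained."""
    def __init__(self):
        self.rows = {}

    def insert(self, key, value):
        if key in self.rows:
            raise KeyError(f"{key} already exists")
        self.rows[key] = value

    def fetch(self, key):
        return self.rows[key]

    def update(self, key, value):
        self.rows[key] = value

# Category 1: interface-correctness test for a single method.
db = InMemoryDb()
db.insert("e1", {"name": "widget"})
assert db.fetch("e1") == {"name": "widget"}
try:
    db.insert("e1", {"name": "dup"})
    assert False, "duplicate insert should fail"
except KeyError:
    pass

# Category 2: use-case test, written the way an API consumer would:
# insert, fetch, change in memory, save, then verify the change stuck.
db2 = InMemoryDb()
db2.insert("order-1", {"status": "new"})
entity = db2.fetch("order-1")
entity = dict(entity, status="shipped")   # change the in-memory copy
assert entity["status"] == "shipped"      # check the in-memory change
db2.update("order-1", entity)             # save
assert db2.fetch("order-1")["status"] == "shipped"  # check it persisted
```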
What's odd is that a lot of people write unit tests only for category 1. This is 'ok', but it hardly proves your code is bug-free or correct. Having unit tests in category 2 is IMHO better, as they test the code the way users would use it.
So I'd go for tests in category 2, and for code that doesn't have a category 2 test, add a category 1 test. This covers both the use cases of your users and checks whether you made a mistake while refactoring internal code.
With code-coverage tools you can see whether your category 2 tests cover enough to drop the category 1 tests. Though it's hard to draw conclusions from code-coverage results: if a method has 5 different paths, you should have 5 different unit tests for that method. However, you might reach 100% coverage with just 3.
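The coverage caveat is easy to demonstrate. In this invented sketch, a function with two independent branches has four execution paths; two well-chosen tests execute every line (100% line coverage) yet exercise only half the paths.

```python
def classify(amount, is_member):
    """Two independent branches give four execution paths."""
    discount = 0
    if amount > 100:
        discount += 10
    if is_member:
        discount += 5
    return discount

# Two tests execute every line of the function (100% line coverage)...
assert classify(150, True) == 15   # both branches taken
assert classify(50, False) == 0    # both branches skipped
# ...yet two of the four paths were untested; a coverage tool
# would never have demanded these:
assert classify(150, False) == 10
assert classify(50, True) == 5
```

So full line coverage says every statement ran, not that every combination of branches was checked, which is exactly why coverage numbers alone are hard to draw conclusions from.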
Personally, I'm more for this order:
- Design the functionality. This isn't a whole app, just a part of the functionality. Design it on paper, in Word, etc., but not in code.
- If required, do functional/technical research, so you won't run into surprises while writing code.
- Write the code, which should be straightforward, as you already have the design and the research at hand.
- Write tests to see if you've done things right: 1) category 1 tests to see if methods break; 2) category 2 tests to see if your designed functionality is indeed a) working and b) implemented correctly.
Make no mistake: category 1 tests don't cover 'implemented correctly'. If some functionality spans 20 method calls, it needs a test in category 2.
I cooked up these 2 categories; it might be that there are more. What I want to illustrate is that test tools etc. are precisely what they're called: tools, like any other tool you might use. They're not a way of life; they're tools, so use them when needed, and don't use them when not needed. Because at the end of the day, 'Agile', 'XP', 'TDD' and all the other acronyms cooked up by book writers and speakers (so they can keep making a living) all come down to Common Sense Software Engineering (CSSE), a term I cooked up last week. Think before you do; that's simply what's required to get the best results.
Recently I met an architect (you know, one who designs buildings) and I'm planning to discuss with him how he handles requests for change, which is the root argument Agile/TDD advocates use to promote their method. I mean: architecture is an old profession; they've dealt with this problem for ages. If a building has to be built and during construction new doors have to be inserted or rooms have to be 'refactored', that's not (always) possible. However, in software land it has to be possible, and it's a 'problem'. I fail to see why this is different in software land, and thus why software engineers have to re-invent methods to overcome problems that other professions solved ages ago.
By 'solved' I mean one of a couple of things: there's a solution, the problem is mitigated, or the problem has a workaround.
Joined: 26-Oct-2003
My $.02 here. I decided to try the agile methodology on my latest project, and I've learned a bit from it. I do like using mock objects for unit testing (category 1 in the CSSE methodology). I think they'll save you quite a bit of coding time, and you can set up return values, set expectations, enforce the order of calls, etc. I use Rhino Mocks and am very happy with it. It's free, and it uses strongly-typed type arguments instead of string type names to create the mocks.
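Rhino Mocks is a .NET library, but the features the poster lists (stubbed return values, call expectations, call order) exist in most mocking frameworks. Here is the same idea sketched with Python's standard-library unittest.mock; the PriceFeed-style names are invented for the example.

```python
from unittest.mock import MagicMock, call

def portfolio_value(feed, holdings):
    """Code under test: looks up each price through the injected feed."""
    return sum(qty * feed.get_price(symbol) for symbol, qty in holdings)

feed = MagicMock()
feed.get_price.return_value = 42.0   # set up a stubbed return value

total = portfolio_value(feed, [("ACME", 2), ("INIT", 3)])
assert total == 42.0 * 5

# Expectations: which calls were made, and in what order.
feed.get_price.assert_has_calls([call("ACME"), call("INIT")])
assert feed.get_price.call_count == 2
```

The point is the same in any language: the framework builds the stand-in and lets you declare the expected interactions, instead of hand-writing a recording class each time.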
I don't like the religious adherence to TDD, as in: you must test everything. There's a point of diminishing returns somewhere, and I think, as Frans said, you gotta use your head to figure out where that is. Really, get the complex stuff covered and do as much of the simple stuff as you think is valuable. The less you do, the less agility you have when making changes, so the probability of change needs to be factored in.
Finally, I don't agree too much with the concept of emergent design that hardcore Agilists (is that a word?) tend to promote. You've gotta do some upfront design, but recognize that, especially when producing non-shrinkwrap software, the requirements are going to change, which will significantly affect your design.
I think the best takeaway I've found in the process is to not abstract too soon. Don't build frameworks and create abstractions for the future unless there is a clear reason to do so. If there are two variants of a process, then hardcode them, be done, and save yourself the time you would have spent creating a framework to support that possible third variant. If there are more than two variants that you know of, go ahead and create the framework, but make sure it's cheap enough relative to the probability of additional variants.
I think that if you abstract as late as possible, keep your code as loosely coupled as possible, follow the Single Responsibility Principle, unit-test the high-value areas, and finish up with integration/user-process testing (category 2 in the CSSE methodology), you're agile enough.
Oh, and one other thing I've found that works well for me is Dependency Injection. It's another way of staying loosely coupled (and it helps tremendously with unit testing) by letting you limit your design-time dependencies to interfaces only and specify the implementation at runtime. Jeremy Miller's StructureMap is nice, but currently has problems with generics. I've heard Castle Windsor is great.
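Containers like StructureMap and Castle Windsor automate the wiring, but the core of Dependency Injection is just constructor injection against an interface. A minimal language-agnostic sketch in Python (all names invented): the class under test depends only on an abstract Clock, so a test injects a fixed clock while production injects the real one.

```python
from abc import ABC, abstractmethod
from datetime import datetime

class Clock(ABC):
    """Design-time dependency: an interface, not a concrete class."""
    @abstractmethod
    def now(self) -> datetime: ...

class SystemClock(Clock):
    """Production implementation, wired in at runtime."""
    def now(self) -> datetime:
        return datetime.now()

class FixedClock(Clock):
    """Test implementation: always returns a known instant."""
    def __init__(self, fixed):
        self.fixed = fixed
    def now(self) -> datetime:
        return self.fixed

class ReportHeader:
    def __init__(self, clock: Clock):   # the implementation is injected
        self.clock = clock

    def render(self) -> str:
        return f"Report generated {self.clock.now():%Y-%m-%d}"

# Production would pass SystemClock(); the unit test injects FixedClock,
# making the otherwise nondeterministic output fully testable.
header = ReportHeader(FixedClock(datetime(2006, 1, 15)))
assert header.render() == "Report generated 2006-01-15"
```

An IoC container simply performs that `ReportHeader(SystemClock())` wiring from configuration instead of by hand.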
Jeff