Re^3: Automatic generation of tests
by adrianh (Chancellor) on Feb 23, 2004 at 15:23 UTC
A green bar at this stage should *not* be taken as a pass, but only as an indicator that the code now fulfills its contract. That alone is not enough to pass the code. Branch and boundary coverage simply can't be analyzed until after the code is written.
While I take your point, the test-first response would be that if boundary behavior is important then it forms part of the code's contract, and so there should be a test for it.
Branch and statement coverage should come out at 100% if you've been doing TDD properly, since you shouldn't have been writing any code that wasn't motivated by a failing test.
Not that whitebox testing and coverage tests are useless, of course - but with a team doing TDD they're much more useful as pointers to areas where TDD is being skipped or done poorly.
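To make the test-first rhythm concrete, here's a minimal sketch in Perl using Test::More. The Cart module and its interface are invented for illustration: the test is written first and fails (red), then just enough code is added to turn the bar green.

    # t/cart.t - written first; it fails (red) until Cart.pm exists
    use strict;
    use warnings;
    use Test::More tests => 2;

    use Cart;

    my $cart = Cart->new;
    is $cart->total, 0, 'empty cart totals zero';
    $cart->add_item( price => 150 );
    is $cart->total, 150, 'total reflects the added item';

    # lib/Cart.pm - only enough code to make the failing tests pass
    package Cart;
    use strict;
    use warnings;

    sub new   { return bless { total => 0 }, shift }
    sub total { return $_[0]{total} }

    sub add_item {
        my ( $self, %args ) = @_;
        $self->{total} += $args{price};
        return;
    }

    1;

Every line in Cart.pm exists because one of the tests demanded it, which is why coverage tends to track along.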
While I take your point, the test-first response would be that if boundary behavior is important then it forms part of the code's contract, and so there should be a test for it.
Boundary tests do not just refer to boundaries describable in the API or the requirements. You can have an algorithm that makes choices based on ranges of internally calculated values. Obviously, in TDD, you cannot predict internal algorithmic branches and range conditions and so you cannot write tests that will necessarily exercise all internal branches and boundary conditions. To put it another way: for any reasonably non-trivial routine with what you think is a sufficient set of unit tests, someone could reimplement that routine in such a way that your current test suite still passes but fails to exercise all branches of the code.
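A made-up Perl illustration of that point: the same tests pass against both versions of the routine below, yet they never take the internal fast-path branch of the second version.

    use strict;
    use warnings;
    use Test::More tests => 6;

    # Version 1: straightforward loop.
    sub sum_to_v1 {
        my ($n) = @_;
        my $sum = 0;
        $sum += $_ for 1 .. $n;
        return $sum;
    }

    # Version 2: same contract, but with an internal branch -
    # a closed-form shortcut for large $n - that the tests never hit.
    sub sum_to_v2 {
        my ($n) = @_;
        return $n * ( $n + 1 ) / 2 if $n > 1_000;    # never exercised below
        my $sum = 0;
        $sum += $_ for 1 .. $n;
        return $sum;
    }

    for my $sum_to ( \&sum_to_v1, \&sum_to_v2 ) {
        is $sum_to->(0),  0,  'empty sum';
        is $sum_to->(1),  1,  'single term';
        is $sum_to->(10), 55, 'small n';
    }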
Belated response :-)
Obviously, in TDD, you cannot predict internal algorithmic branches and range conditions and so you cannot write tests that will necessarily exercise all internal branches and boundary conditions.
I know my branch and statement coverage stats have improved considerably since I started doing TDD. Since you should only be writing code that is motivated by a failing test, you are, from a certain point of view, predicting what the code should do when you write the test.
Does this always result in 100% branch coverage? No, of course not. However, lack of 100% branch coverage doesn't always indicate a bug either.
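For instance, a defensive branch can be legitimately uncovered (a made-up fragment):

    # Coverage will flag the die() branch as unexercised, but that is
    # not a bug: callers are already required to validate the divisor,
    # so no test in the suite drives this path.
    sub divide {
        my ( $x, $y ) = @_;
        die "zero divisor" if $y == 0;
        return $x / $y;
    }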
To put it another way: for any reasonably non-trivial routine with what you think is a sufficient set of unit tests, someone could reimplement that routine in such a way that your current test suite still passes but fails to exercise all branches of the code.
This is of course true. However if you start doing arbitrary rewrites of code aimed at breaking the test suite you're no longer doing TDD :-)
You're right that doing TDD doesn't produce a test suite that guarantees that arbitrary code meets the requirements. That's not what TDD aims to do. TDD is a design process, not a testing process. The aim of TDD is to produce working code that meets the requirements, not a test suite that exercises arbitrary code.
This doesn't mean that I think more traditional white box testing, branch coverage, statement coverage, etc. are useless - far from it. They're excellent tools and you can find many bugs with them.
However, with a team doing good TDD I find you'll often get more bang for your buck by spending time on acceptance tests and exploratory testing. As ever YMMV.
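(For the Perl side of that, Devel::Cover from CPAN is the usual tool. Assuming it's installed, something like this reports statement and branch coverage for a test run - the test file path is illustrative:

    $ perl -MDevel::Cover t/cart.t   # run the tests with coverage enabled
    $ cover                          # summarise statement/branch coverage

The uncovered branches it lists are exactly the pointers to skipped TDD mentioned above.)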
Re: Re: Re: Automatic generation of tests
by flyingmoose (Priest) on Feb 23, 2004 at 17:12 UTC
It's obvious that tests do not cease (edit: typo FIXED!) to evolve after the code is complete. However, it should be stated that (if you do have a solid API) writing tests first has some advantages. Namely, you can make sure you have implemented your entire API and that the API works.
Once you start scrolling through hundreds of lines of code, it's hard to visualize your API's use cleanly, because you start to confuse the API with the implementation. Again, I don't do this nearly enough, but it has great merit, and it is something I *should* do for larger projects.
By all means, tests should evolve while the program is written -- however, boundary coverage should be one of the first things you think about when testing your API. That is what automated testing is for.
If your API passes all tests, and the whole of your API is tested, you won't have any issues with coverage... easier said than done.
Naturally, testing code by hand is critical in validating that the tests themselves are valid.
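As a sketch of what boundary-first API tests can look like in Perl - MyModule and clamp() are invented for illustration - the idea is to probe at, just below, and just above every documented edge:

    use strict;
    use warnings;
    use Test::More tests => 5;

    use MyModule qw(clamp);    # hypothetical: clamp( $n, $lo, $hi )

    is clamp( -1, 0, 10 ), 0,  'below lower bound clamps up';
    is clamp(  0, 0, 10 ), 0,  'exactly at lower bound';
    is clamp(  5, 0, 10 ), 5,  'interior value passes through';
    is clamp( 10, 0, 10 ), 10, 'exactly at upper bound';
    is clamp( 11, 0, 10 ), 10, 'above upper bound clamps down';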
By all means, tests should evolve while the program is written -- however, boundary coverage should be one of the first things you think about when testing your API. That is what automated testing is for. If your API passes all tests, and the whole of your API is tested, you won't have any issues with coverage... easier said than done.
See the reply to adrianh above regarding internal/algorithmic boundary conditions.
TDD is an *excellent* practice. Didn't I start out with that line? I am only trying to get the point across that it isn't the be-all and end-all of testing; it has limitations.
Steve McConnell's Code Complete, 2nd Ed., cites research indicating that while developers think they are getting high code coverage in testing, in reality they get 80% at best, 30% at worst, and around 60% on average. 60% is very, very low code coverage. The studies aren't recent, but if anything has changed I suspect it is only that developer confidence has risen, not that test coverage has.
The studies aren't recent, but if anything has changed I suspect it is only that developer confidence has risen, not that test coverage has.
No, the studies are not recent - and none of them are for people doing TDD. My experience, and the experience of others I've talked to who are doing TDD, is that code coverage goes way up when you do TDD.
This isn't really surprising, since with TDD you should not be writing any code that isn't driven by a failing test.
Now, if only somebody could find the time and money to do some research :-)
(and just to emphasise that I agree completely that TDD isn't all there is to testing :-)