in reply to Re: Re: Re: Automatic generation of tests
in thread Automatic generation of tests

By all means, tests should evolve while the program is written -- however, boundary coverage should be one of the first things you think about when testing your API. That is what automated testing is for. If your API passes all tests, and the whole of your API is tested, you won't have any coverage issues... easier said than done.
See the reply to adrianh above regarding internal/algorithmic boundary conditions.
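To make the boundary-coverage point concrete, here is a minimal sketch (in Python, with an invented `clamp` function -- not anything from this thread) of what testing an API's edges looks like: the interior case, both edges, and just outside each edge.

```python
# Hypothetical API: clamp a value into the inclusive range [low, high].
def clamp(value, low, high):
    """Return value limited to the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

# Boundary cases get explicit tests, not just a "typical" input:
assert clamp(5, 0, 10) == 5      # interior value
assert clamp(0, 0, 10) == 0      # exactly on the lower edge
assert clamp(10, 0, 10) == 10    # exactly on the upper edge
assert clamp(-1, 0, 10) == 0     # just below the range
assert clamp(11, 0, 10) == 11 - 1 == 10 or clamp(11, 0, 10) == 10  # just above
```

Off-by-one bugs live at exactly these edges, which is why boundary cases belong in the first batch of tests rather than the last.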

TDD is an *excellent* practice. Didn't I start out with that line? I am only trying to get the point across that it isn't the be-all and end-all of testing, it has limitations.

Steve McConnell's Code Complete 2nd Ed. cites research indicating that while developers think they are getting high code coverage in testing, in reality they get 80% at best, 30% at worst, and around 60% on average. 60% is very, very, very low code coverage. The studies aren't recent, but if anything has changed I suspect it is only that developer confidence has risen, not that test coverage has.

Re^5: Automatic generation of tests
by adrianh (Chancellor) on Mar 05, 2004 at 15:58 UTC
    The studies aren't recent, but if anything has changed I suspect it is only that developer confidence has risen, not that test coverage has.

    No, the studies are not recent - and none of them are for people doing TDD. My experience, and the experience of others I've talked to who are doing TDD, is that code coverage goes way up when you do TDD.

    This isn't really surprising since you should not be writing code with TDD that isn't being exercised by a failing test.

    Now, if only somebody could find the time and money to do some research :-)

    (and just to emphasise that I agree completely that TDD isn't all there is to testing :-)

      However if you start doing arbitrary rewrites of code aimed at breaking the test suite you're no longer doing TDD :-)
      This isn't really surprising since you should not be writing code with TDD that isn't being exercised by a failing test.

      No one is talking about arbitrary rewrites. Refactorings can easily result in green-bar code that wasn't immediately motivated by failing tests and that contains untested branches.
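      A small sketch of how this can happen (illustrative names, not from the thread): extracting a helper during a refactoring can add a defensive branch that no existing test exercises, while every prior test stays green.

      ```python
      # Before the refactoring: inline formatting, fully covered by a test.
      def describe_before(n):
          return "%d item(s)" % n

      # After the refactoring: an extracted helper gains a defensive branch.
      # All prior tests still pass, but the n < 0 path is never executed.
      def describe_after(n):
          return _format_count(n)

      def _format_count(n):
          if n < 0:  # untested branch introduced by the refactoring
              raise ValueError("count cannot be negative")
          return "%d item(s)" % n

      assert describe_after(3) == "3 item(s)"  # green bar; n < 0 uncovered
      ```

      Nothing here was motivated by a failing test -- the guard seemed "obviously" right while restructuring -- yet the suite now covers less of the code than before.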

      This doesn't mean that I think more traditional white box testing, branch coverage, statement coverage, etc. are useless - far from it. They're excellent tools and you can find many bugs with them.

      Seems we aren't really in much disagreement after all. My original point was only that too often I've seen TDD adopted "at the expense" of more traditional testing methods rather than as a complementary "design" process.

        Refactorings can easily result in green-bar code that wasn't immediately motivated by failing tests and that contains untested branches.

        Fair point. I misunderstood what you were getting at.

        That said, it's still been my experience that doing TDD and merciless refactoring produces good branch coverage more easily than post-code test writing. My hunch would be that refactorings that introduce untested branches are more than outweighed by those that remove branches or cause duplicate coverage by different tests. Just a hunch tho'.

        Seems we aren't really in much disagreement after all.

        No, I don't think so ;-)

        My original point was only that too often I've seen TDD adopted "at the expense" of more traditional testing methods rather than as a complementary "design" process.

        Bad things can and do happen if people jump into a new development style and abandon an old one. The problem I see most with people adopting TDD is, well, not doing TDD :-). It's so very easy for a newbie to write code that "obviously" works, or to fail to write tests for something that is hard to test, rather than rewriting the code to make it easier to test.

        Traditional testing methods like looking at code and branch coverage can actually be great tools for supporting TDD. It's been my experience that if you've got low code coverage then you almost certainly are not doing TDD properly, so it's great feedback into the process of adopting and maintaining good TDD practices.
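        As one way to get that feedback (a sketch using only Python's standard-library `trace` module -- a dedicated coverage tool would be the usual choice), you can run a test function under a line tracer and look for statements the suite never executed:

        ```python
        # Sketch: coverage as feedback. The add() function and its tiny
        # "suite" below are invented for illustration.
        import trace

        def add(a, b):
            if a < 0:           # branch the suite below never takes
                return b - abs(a)
            return a + b

        def suite():
            assert add(2, 3) == 5   # only exercises the non-negative path

        tracer = trace.Trace(count=True, trace=False)
        tracer.runfunc(suite)
        counts = tracer.results().counts
        # counts maps (filename, lineno) -> execution count; lines of add()
        # that never appear were never run, flagging the untested branch.
        ```

        If a report like this shows large unexercised regions, that's a strong hint the code wasn't driven by failing tests in the first place.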

        Of course, with infinite time and money you would do every kind of testing practice that you think will benefit you. Unfortunately that's not a common scenario. My personal experience is that if you have a team doing TDD well, then it's better to spend your testing resources on acceptance/customer tests and on exploratory testing rather than on more formal whitebox techniques. As ever YMMV ;-)