in reply to Re: Re: Automatic generation of tests
in thread Automatic generation of tests

A green bar at this stage should *not* be taken as a pass, but only as an indicator that the code now fulfills its contract. That is insufficient to pass the code: branch and boundary coverage simply can't be analyzed until after the code is written.

While I take your point, the test-first response would be that if boundary behavior is important then it forms part of the code's contract, and so there should be a test for it.

Branch and statement coverage should come out at 100% if you've been doing TDD properly, since you shouldn't have been writing any code that wasn't motivated by a failing test.
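To make that concrete, here's a hypothetical Python sketch (not from the original thread): if each branch only ever comes into existence to turn one failing test green, then the suite that motivated the code necessarily exercises every branch.

```python
# Hypothetical TDD result: each branch of classify() exists only because
# one of the tests below failed first, so running the tests covers
# every branch and statement.

def classify(n):
    if n < 0:              # written to satisfy test_negative
        return "negative"
    if n == 0:             # written to satisfy test_zero
        return "zero"
    return "positive"      # written to satisfy test_positive

def test_negative():
    assert classify(-5) == "negative"

def test_zero():
    assert classify(0) == "zero"

def test_positive():
    assert classify(3) == "positive"

for test in (test_negative, test_zero, test_positive):
    test()
```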

Not that white-box testing and coverage analysis are useless, of course - but with a team doing TDD they're much more useful as pointers to areas where TDD is being skipped or done poorly.
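As a toy illustration of coverage-as-pointer (hand-rolled bookkeeping standing in for a real tool like Devel::Cover or coverage.py; the names are invented), an unexercised branch doesn't prove a bug, but it does show where no test motivated the code:

```python
# Toy branch-coverage bookkeeping: each branch records itself when
# taken, and the "report" lists branches the test suite never reached.

taken = set()

def sign(n):
    if n > 0:
        taken.add("positive")
        return 1
    elif n < 0:
        taken.add("negative")
        return -1
    taken.add("zero")
    return 0

# The tests that were actually written, test-first:
assert sign(5) == 1
assert sign(-5) == -1

missed = {"positive", "negative", "zero"} - taken
# missed contains "zero": not proof of a bug, but a pointer to a case
# that no test ever motivated - exactly where to look for skipped TDD.
```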

Re: Re^3: Automatic generation of tests
by Anonymous Monk on Feb 23, 2004 at 18:41 UTC
    While I take your point, the test-first response would be that if boundary behavior is important then it forms part of the code's contract, and so there should be a test for it.
    Boundary tests do not refer only to boundaries describable from the API or the requirements. You can have an algorithm that makes choices based on ranges of internally calculated values. Obviously, in TDD, you cannot predict internal algorithmic branches and range conditions and so you cannot write tests that will necessarily exercise all internal branches and boundary conditions. To put it another way, for any reasonably non-trivial routine that you have what you think is a sufficient set of unit tests for, someone could reimplement that routine in such a way that your current test suite still passes but fails to exercise all branches of the code.
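A hypothetical Python sketch of that point (invented names and threshold): two implementations satisfy the same black-box test suite, but the second contains an internal range-based branch the suite never reaches.

```python
# Two implementations that honor the same contract. The second makes an
# internal choice on a range of a calculated value; the contract-level
# tests cannot know to exercise it.

def abs_diff_v1(a, b):
    return abs(a - b)

def abs_diff_v2(a, b):
    diff = a - b
    if diff > 1000:      # internal "fast path" for large gaps;
        return diff      # never reached by the tests below
    return diff if diff >= 0 else -diff

# The original unit tests pass against both versions...
for impl in (abs_diff_v1, abs_diff_v2):
    assert impl(5, 3) == 2
    assert impl(3, 5) == 2
# ...yet the diff > 1000 branch in v2 was never exercised.
```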

      Belated response :-)

      Obviously, in TDD, you cannot predict internal algorithmic branches and range conditions and so you cannot write tests that will necessarily exercise all internal branches and boundary conditions.

      I know my branch and statement coverage stats have improved greatly since I started doing TDD. Since you should only be writing code that is motivated by a failing test, you are, from a certain point of view, predicting what the code should do when you write the test.

      Does this always result in 100% branch coverage? No, of course not. However, lack of 100% branch coverage doesn't always indicate a bug either.
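One common, harmless source of missed branches (a hypothetical example, not from the thread) is a defensive guard for a state the callers never produce:

```python
# A branch that coverage will flag as unexercised without it being a
# bug: a defensive guard for a "can't happen" input.

def mean(values):
    if not values:
        # Defensive check; callers are expected to have validated
        # their input already, so no test ever reaches this branch.
        raise ValueError("mean() of empty sequence")
    return sum(values) / len(values)

assert mean([2, 4, 6]) == 4
```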

      To put it another way, for any reasonably non-trivial routine that you have what you think is a sufficient set of unit tests for, someone could reimplement that routine in such a way that your current test suite still passes but fails to exercise all branches of the code.

      This is of course true. However if you start doing arbitrary rewrites of code aimed at breaking the test suite you're no longer doing TDD :-)

      You're right that doing TDD doesn't produce a test suite that guarantees that arbitrary code meets the requirements. That's not what TDD aims to do. TDD is a design process, not a testing process. The aim of TDD is to produce working code that meets the requirements, not a test suite that exercises arbitrary code.

      This doesn't mean that I think more traditional white box testing, branch coverage, statement coverage, etc. are useless - far from it. They're excellent tools and you can find many bugs with them.

      However, with a team doing good TDD I find you'll often get more bang for your buck by spending time on acceptance tests and exploratory testing. As ever YMMV.