in reply to Re: OT: TDD question
in thread OT: TDD question

On the contrary, this is not an argument for permitting less than 100% coverage; it's an argument for requiring greater than 100%. If you only exercise each line of code once in your tests, then you almost certainly don't have enough tests.

Your story is just a special case of not testing with enough inputs to your code. A developer wouldn't have to be familiar with this specific feature of mail servers in order to catch this in unit tests.
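
To make that concrete, here's the shape of test I mean -- a table-driven loop over many inputs. This is my own sketch, not the original poster's code, and is_valid_recipient() is a deliberately naive stand-in for whatever routine the story was actually about:

    use strict;
    use warnings;
    use Test::More;

    # A deliberately naive stand-in for the code under test.
    sub is_valid_recipient {
        my ($addr) = @_;
        return $addr =~ /^[\w.+-]+\@[\w.-]+$/ ? 1 : 0;
    }

    # One routine, many inputs -- including the odd ones (plus-addressing,
    # uppercase, the empty string) that a single happy-path test never sees.
    my %cases = (
        'user@example.com'     => 1,
        'user+tag@example.com' => 1,
        'USER@EXAMPLE.COM'     => 1,
        'user@'                => 0,
        ''                     => 0,
    );

    while ( my ( $address, $expected ) = each %cases ) {
        is( is_valid_recipient($address), $expected,
            "is_valid_recipient('$address')" );
    }

    done_testing();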

Re^3: OT: TDD question
by Anonymous Monk on Dec 09, 2004 at 18:11 UTC
    "...it's an argument for requiring greater than 100%."

    Let's not be silly. Greater than 100% coverage is impossible. What this is saying is that EXTERNAL factors cannot always be covered with TDD... and unfortunately most of my code deals with external factors. Not just black-box code, but remote systems and odd code combinations. Firmware. Drivers. Network issues.

    TDD is great for Perl modules and small pieces. It does not adapt well to large-scale systems. There you need custom automated test environments that are very domain-specific and require lots of setup and specific hardware configurations (and variations on those). Even then, it's not perfect.

      You've taken that phrase out of context and as a result missed the point, which was that you need to exercise each line of code many times to have a serious hope of finding all or most of the bugs. Just hitting 100% by creating one test case for each branch is almost never sufficient.
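
      As a throwaway illustration (clamp() below is mine, not anything from this thread), the three tests here hit every statement and every branch and all pass, yet the function is wrong for any value strictly between the bounds:

          use strict;
          use warnings;
          use Test::More tests => 3;

          # Buggy on purpose: the second comparison should be $n > $max.
          sub clamp {
              my ( $n, $min, $max ) = @_;
              return $min if $n < $min;
              return $max if $n > $min;    # the bug survives full coverage
              return $n;
          }

          is( clamp( -5, 0, 10 ), 0,  'below the range' );     # first branch
          is( clamp( 15, 0, 10 ), 10, 'above the range' );     # second branch
          is( clamp( 0,  0, 10 ), 0,  'at the lower bound' );  # final return

          # clamp(5, 0, 10) returns 10 instead of 5, and no test ever asks.

      Run that under Devel::Cover and it will happily report 100% statement and branch coverage for clamp().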

      When it comes to your argument about external factors, Perl actually has some advantages over more static languages. Need to test rare conditions in a socket connection? Just override the relevant methods of IO::Socket to produce them when you want. The same trick works for lots of things which are otherwise hard to test.
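
      A rough sketch of the idea, with everything in it (read_greeting(), the host, the port) invented for the example rather than taken from real code:

          use strict;
          use warnings;
          use Test::More tests => 1;
          use IO::Socket::INET;
          use Errno qw(ETIMEDOUT);

          # Stand-in client code: fetch the server's greeting line,
          # or return undef if the connection fails.
          sub read_greeting {
              my ( $host, $port ) = @_;
              my $sock = IO::Socket::INET->new(
                  PeerAddr => $host,
                  PeerPort => $port,
              ) or return undef;
              return scalar <$sock>;
          }

          {
              # Make every connection attempt "time out" without touching
              # the network at all.
              no warnings 'redefine';
              local *IO::Socket::INET::new = sub { $! = ETIMEDOUT; return undef };

              is( read_greeting( 'mail.example.com', 25 ), undef,
                  'read_greeting survives a connection timeout' );
          }

      The local keeps the override scoped to that block, so the rest of the test file still sees the real IO::Socket::INET->new.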

      TDD can work for large systems, because what you do is test the hell out of the small pieces, with the same usage patterns as your full application. This doesn't replace large-scale integration testing, but that's no reason to skip the unit tests. I catch lots of stuff in unit tests that doesn't get caught in our full-system functional tests (and vice versa).

        I knew what you meant. Perhaps you don't work in a similar industry or on similar software. Different people work on different things, and that's fine.

        My point is that running the same code over and over does not reproduce an external stimulus, and there is no mechanism for writing enough tests to cover even 50% of the possible things that may arise.

        Anyhow, if you have actually written code that simulates network failures, firmware quirks, and so forth, good for you. Otherwise, realize that while testing is a decent piece of the picture, real companies test against real hardware. Simply put, you do not know what kind of errors will come at you...

        Moral of the story -- the problem of 'when have I written enough tests?' is about as hard as the halting problem. Simply ensuring each line of code is executed 50 times proves nothing in this case.