Most bugs arise from the programmer making assumptions. If the same programmer writes the tests for the code s/he wrote, they will make the same assumptions. The net result is that they write tests for every case they considered when writing the code, all of which pass--giving them N of N tests passed (100%) and a hugely false sense of security.

I find that this does not happen if you're using TDD. When you only write code by first producing a failing test, you are forced to challenge the assumptions in your code at every stage. Every time you make something work, the next stage is "how do I break this?"

I'm still making up my mind about TDD, but for the time being I think I'm leaning towards agreeing with BrowserUk on this one. In my experience the nasty bugs come, literally, from where I least expect them, and this is crucial. There is no hope that I will somehow write a test to catch such a bug, no matter how hard I try, because the best I can do is test those aspects that I regard as potential sources of problems. And in fact, during my recent applications of TDD, some very nasty bugs have arisen despite a rigorous adherence to TDD principles. (These bugs have all become manifest after the system had "aged" a bit and attained a particular, and as it turns out ill-conditioned, state; therefore, all the simple tests that checked that functions produce the expected outputs missed these "history-dependent" bugs. I'm beginning to see that the functional programming folks are on to something with their avoidance of assignment and side effects.)
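To make that concrete, here is a contrived sketch (the Counter package and its thousand-entry "cleanup" are invented for illustration, not taken from my actual code) of the kind of history-dependent failure I mean: each call looks fine in isolation and the obvious unit tests pass, but the bug hides in state that only accumulates after many calls.

    package Counter;
    sub new { bless { n => 0, log => [] }, shift }
    sub bump {
        my $self = shift;
        $self->{n}++;
        push @{ $self->{log} }, time;                  # hidden, ever-growing state
        $self->{n} = 0 if @{ $self->{log} } > 1000;    # silent "cleanup" resets the count
        return $self->{n};
    }

    package main;
    use Test::More tests => 2;
    my $c = Counter->new;
    is( $c->bump, 1, 'first bump returns 1' );
    is( $c->bump, 2, 'second bump returns 2' );
    # 2 of 2 tests pass; the reset only bites after 1000+ calls,
    # which no input/expected-output style test ever exercises.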

Also, I find it interesting how your take on TDD differs from the one Kent Beck describes in his widely cited TDD by Example. Beck uses "test first" only as a precondition for adding functionality to his software. That is, he says that one should not write any new code in one's application until one has written a failing test that will succeed only after the new code has been written. He makes no mention of writing tests specifically designed to make the software fail. Admittedly, one can view this sort of "stress" testing as a special case of Beck's formulation: the "functionality" one is adding is general robustness. Still, I am surprised that Beck's book puts so little emphasis on this aspect of testing.
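For what it's worth, Beck's red/green loop looks roughly like this in Perl (a minimal sketch; MyStack and its interface are made up here): the test is written before the module exists, is watched to fail, and only then is just enough code written to make it pass.

    use strict;
    use warnings;
    use Test::More tests => 2;

    # Written *before* MyStack.pm exists -- running it now fails,
    # which is the "red" step of Beck's cycle.
    use_ok('MyStack');

    my $s = MyStack->new;
    $s->push(42);
    is( $s->pop, 42, 'pop returns what was pushed' );

    # Only after watching this fail do you create MyStack.pm with
    # just enough code (new, push, pop) to turn the run green.

My point is that nothing in this loop pushes you to write the "now how do I break it?" test once the bar is green.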

the lowliest monk

