In the past, I've meditated on the fact that I've been really hopping onto the TDD (Test-Driven Development) wagon. So, when stvn asked me to write Class::LazyLoad, I wrote a test, wrote some code, wrote a test, wrote some code, etc.

Except, it didn't work like that. I'd say:

  1. Let's start on feature XYZ
  2. Let's write the first test for the feature
  3. Yep, the test fails
  4. Let's write some code that passes the test
  5. Yep, the test passes
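To make that concrete, the first red/green round I mean looks something like this (a Test::More sketch against a made-up My::Counter class, not the actual Class::LazyLoad code):

  use strict;
  use warnings;
  use Test::More tests => 1;

  # Step 4's "minimum needed to satisfy the first test", inlined
  # here so the sketch stands alone (My::Counter is made up):
  package My::Counter;
  sub new   { bless { count => 0 }, shift }
  sub count { $_[0]->{count} }

  package main;

  # Step 2's first test for the feature: with the package above
  # commented out it fails (step 3), and with it in it passes
  # (step 5).
  my $c = My::Counter->new;
  is( $c->count, 0, 'a new counter starts at zero' );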

Now, at this point, I realize that before I get much further, I have to write like 3 more tests to get anywhere close to Devel::Cover's strident call. And, that's just with the minimum needed to satisfy the first test!
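(For anyone unfamiliar with it, the usual Devel::Cover routine, per its documentation, goes roughly like this:

  # clear any old coverage data
  cover -delete

  # run the test suite with coverage collection enabled
  HARNESS_PERL_SWITCHES=-MDevel::Cover make test

  # generate the coverage report
  cover

and it's that report doing the calling.)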

Has anyone else encountered this? What do people do?

Being right does not endow the right to be rude; politeness costs nothing.
Being unknowing is not the same as being stupid.
Expressing a contrary opinion, whether to the individual or the group, is more often a sign of deeper thought than of cantankerous belligerence.
Do not mistake your goals as the only goals; your opinion as the only opinion; your confidence as correctness. Saying you know better is not the same as explaining you know better.

Re: OT: TDD question
by dws (Chancellor) on Dec 05, 2004 at 05:52 UTC

    Has anyone else encountered this? What do people do?

    I write the extra tests.

    I've found it to be pretty common when doing TDD that I have to write about 3x as many tests as I first think in order to get to the payoff. It's frustrating at first, but then you realize how smoothly the work flows, and you get used to it.

Re: OT: TDD question
by stvn (Monsignor) on Dec 05, 2004 at 16:54 UTC

    The central point of TDD (at least as I understand it) is to write tests which verify your assumptions about what your code will do. This gives you a way of verifying that your code behaves as you would expect it to.

    Now, at this point, I realize that before I get much further, I have to write like 3 more tests to get anywhere close to Devel::Cover's strident call. And, that's just with the minimum needed to satisfy the first test!

    That's the thing about Devel::Cover: it shows you exactly which of your code your tests are running. Testing your "assumptions" will likely not exercise all of your code, and this is where Devel::Cover comes in.

    Personally, my approach is not true TDD. I like to write some code before I write tests so that I have something to work off of. At that point I proceed going back and forth between writing code and tests, sometimes tests first, other times code first depending upon where my inspiration is taking me. I still consider this TDD since test writing is still an integral and driving part of my process.

    Only after I feel like I have something reasonably completed (not finished, but complete for that stage of development) do I run Devel::Cover. At that point I start writing tests again to fill in the missing coverage.

    Now, the reason I wait until (almost) the end to run Devel::Cover is that code which is not run (covered) by one set of tests will often be run (covered) by another. So it often doesn't make sense to write tests to cover that code at that point anyway.

    The process of writing tests for code which is not covered under my basic "assumptions" usually leads to several things.

    1. I realize ways in which my code can be used/abused that I had never thought of. This sometimes leads to adding error checks, or even re-writing whole sections of code to be more in line with my assumptions. (A sketch of this follows the list.)
    2. I sometimes find some unreachable code. Sometimes I realize I can delete this code, and other times I realize that I need to re-write things so that the code is actually reachable.
    3. I find lots of fodder for the CAVEATS section of my documentation :)
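    As a concrete illustration of point 1 (with an entirely made-up My::Widget), the kind of test a coverage run pushes me to write looks like:

      use strict;
      use warnings;
      use Test::More tests => 2;

      # Made-up module whose error branch, per a Devel::Cover run,
      # my "assumption" tests never exercised.
      package My::Widget;
      sub new {
          my ( $class, %args ) = @_;
          die "size must be positive\n"
              unless defined $args{size} && $args{size} > 0;
          return bless {%args}, $class;
      }

      package main;

      # The happy path my assumptions covered...
      ok( My::Widget->new( size => 3 ), 'valid size accepted' );

      # ...and the error path coverage told me I had missed.
      eval { My::Widget->new( size => -1 ) };
      like( $@, qr/size must be positive/, 'negative size rejected' );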
    The level of insight into my running code which Devel::Cover provides is, IMO, what makes it such a valuable tool. I have found that since I started using Devel::Cover more, the coverage % on the first Devel::Cover run has gone up. So, in a way, it could be said that Devel::Cover has helped me become a better test-writer and therefore a better programmer.

    -stvn
Re: OT: TDD question
by hardburn (Abbot) on Dec 05, 2004 at 22:35 UTC

    Getting 100% coverage is of dubious utility. I'm usually happy if I'm >85%.

    A while back, I was writing an important new application for my company. It would often send out e-mails, and I had coded the mail-sending module to send e-mail via our mail server, with the user's e-mail address on the To and/or From lines. While testing it, we had always used e-mail addresses within our own domains.

    On release, no e-mail was being sent. I soon figured out why: the mail server considered the web server an external host, and since it was (correctly) not configured to be an open relay, it would drop the mail when the domain portion of the From address wasn't one of our domains.

    If we had been using TDD (which we weren't, for political reasons), we could potentially have shown 100% coverage with Devel::Cover and never caught this bug. The e-mail addresses used in the tests would almost certainly have been internal ones. Only a developer experienced with this gotcha would have caught it before release.

    "There is no shame in being self-taught, only in not trying to learn in the first place." -- Atrus, Myst: The Book of D'ni.

      100% code coverage only means all your code has been run, not that it has been tested. There is a huge difference.
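      A tiny, contrived example of that difference:

        use strict;
        use warnings;
        use Test::More tests => 1;

        # Deliberately buggy: subtracts instead of adding.
        sub add { my ( $x, $y ) = @_; return $x - $y }

        # This one test runs every statement in add() -- 100%
        # coverage -- but 0 - 0 equals 0 + 0, so the assertion is
        # too weak to notice the bug.
        is( add( 0, 0 ), 0, 'add(0,0) is 0' );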

      I read a good paper called "How to misuse Code Coverage" a while ago that discusses exactly your point. It is available here as a PDF. It's worth a read if you haven't read it already.

      Personally, I shoot for 95% and above and I am happy. As I described above, I try not to worry about coverage until the end; that way I am concentrating on testing my code and not covering it. And let's not forget too that unit tests only test isolated chunks of code. Integration tests are just as important (if not more).

      -stvn

      On the contrary, this is not an argument for permitting less than 100% coverage; it's an argument for requiring greater than 100%. If you only exercise each line of code once in your tests, then you almost certainly don't have enough tests.

      Your story is just a special case of not testing with enough inputs to your code. A developer wouldn't have to be familiar with this specific feature of mail servers in order to catch this in unit tests.
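      For instance (assuming a send_mail() interface like the module described above; the function here is only a stand-in), simply looping the same test over internal and external From domains would have exercised the failing case:

        use strict;
        use warnings;
        use Test::More tests => 2;

        # Stub so the sketch compiles; the real send_mail() would
        # talk to the mail server from the story above.
        sub send_mail { 1 }

        # The external domain is the input that failed on release.
        for my $from ( 'user@ourdomain.com', 'user@example.org' ) {
            ok( send_mail( from => $from, to => 'ops@ourdomain.com' ),
                "mail accepted with From <$from>" );
        }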

        ", it's a argument for requiring greater than 100%."

        Let's not be silly. Greater than 100% coverage is impossible. What this is saying is that EXTERNAL factors cannot always be covered with TDD... and unfortunately most of my code deals with external factors. Not just black-box code, but remote systems and odd code combinations. Firmware. Drivers. Network issues.

        TDD is great for Perl modules and small pieces. It does not adapt well to large-scale systems. There you need custom automated test environments that are very domain-specific and also require lots of setup and specific hardware configurations (and variations on those). Even then, it's not perfect.