Re: Does anybody write tests first?

by cLive ;-) (Prior)
on Feb 22, 2008 at 06:37 UTC


in reply to Does anybody write tests first?

I used to hate tests, but now I love them.

When possible, I prefer to write the tests first - and yes, it can help you catch a kludgy implementation earlier.
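For instance, here's a (contrived) sketch of the kind of test I'd write before the code exists - the module and function names are invented for illustration, but the point is that the test fails first and then drives the implementation:

    use strict;
    use warnings;
    use Test::More tests => 2;

    # Written before My::Text::Util::slugify() exists - run it, watch it
    # fail, then write just enough code to make it pass.
    use_ok('My::Text::Util');
    is( My::Text::Util::slugify('Hello, World!'), 'hello-world',
        'slugify lowercases and hyphenates' );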

Sometimes, when refactoring older code, I find it easier to do the rewrite first, because I often don't fully understand what I'm refactoring when I start, so my tests would be way off.

Andy Lester has written some great stuff on the joys of testing. I recommend this (HTML version). If that doesn't convince you of the benefits of a good testing strategy, I don't think anything will :)

Re^2: Does anybody write tests first?
by dsheroh (Monsignor) on Feb 22, 2008 at 07:22 UTC
    It didn't convince me.

    The HTML version of the slides, at least, does a great job of outlining testing strategies, policies, and techniques, but it says absolutely nothing about why to test (unless you count "remember: human testing doesn't scale") or what the benefits of a good testing strategy might be.

      Me neither.

      I'm utterly convinced of both the need for, and the benefits of testing during the development process. My argument is entirely aimed at the methodology and tools being advocated and used.

      As for the slides, I got as far as pages 12 and 13 and just stopped reading:

      1. Redundancy Is Good.

        No! Redundancy is redundant.

        I'll put that another way: it costs time, and time is money. Redundant effort is, therefore, wasted money.

      2. There are no stupid tests.

        Rubbish! Of course there are stupid tests.

        Testing that perl can load a module and converting the result into a 0.05% change in a statistic is stupid. When left to its own devices, perl will report a failure to load a module with a concise and accurately informative warning message...and stop. Preventing that stop is pointless. Every subsequent test of that module is, by definition, going to fail.

      3. I did go back and scan on, because I felt I ought to. And I could pick out more, but I won't.

      When testing tools or suites are more complex and laborious than the code under test; when they prevent or inhibit you from using simple tools (like print/warn/perl -de1), they become a burden rather than a help.

      In the car industry (in which I grew up), there are many ways of testing (for example) the diameter of a hole in a piece of sheet metal. You might use a laser micrometer. You might use a set of inner calipers and a set of slip gauges.

      The favoured test tool? A simple, tapered shaft with a handle and clearly marked upper and lower bounds. You simply poke it in the hole, and if it passes the lower bound and stops before the upper, the hole is within tolerance. It is simple to operate, extremely robust in use, and takes no time at all to use. It's easy to use, so it gets used.

      The Occam's Razor of test tools. I seem to think that Andy has an appreciation for OZ. He should consider applying it in this case also.


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.

        Testing that perl can load a module and converting the result into a 0.05% change in a statistic is stupid.

        A simple sanity check before deployment might simply check that everything loads correctly. As you know, use does more than just load; in some cases it actually executes code. If the environment has changed, or the config has changed (as it often does when you go to production), just loading a module might break.

        One might have modules that are rarely used, or loaded dynamically via require. If your other testing isn't very good, at least you can check these rare cases with a quick Test::More::use_ok.
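        A bare-bones example of what I mean - the module names here are made up, but the shape is just a couple of use_ok calls in a .t file:

          use strict;
          use warnings;
          use Test::More tests => 2;

          # Load-sanity checks: prove that each module compiles and its
          # BEGIN-time code runs in *this* environment.
          use_ok('My::App::Config');
          use_ok('My::App::Reports');   # rarely used; normally pulled in via require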

        If someone broke a module such that it doesn't load, I want to show that before I get ten minutes into another (manual) test that depends on it.

        Certainly there are ways to misuse a simple "does this load" test, but such tests are not without value.

        Testing that perl can load a module and converting the result into a 0.05% change in a statistic is stupid.

        And yet I've seen it catch real bugs in real code.

      "Human testing doesn't scale"? Maybe I should have linked to another of his slide presentations that has more background info than practical code.

      But for me, from experience, leaving a smoke test running hourly as a cron job (one that emails me if there are errors) while I'm developing has saved me a pile of debugging. Quite a few times I've assumed something, only to find that that assumption broke another assumption elsewhere in the code.

      I know I'm far too lazy to manually test every time I add new functionality or optimizations, so knowing that I'll get an email if there's either something wrong with my code - or a test has broken because the functionality changed - is invaluable.
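      Roughly, the smoke job is nothing fancier than this sketch (the paths and addresses are placeholders, and it assumes prove and a local sendmail are available):

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Run the suite and mail the output only when something fails.
        my $output = qx{cd /path/to/project && prove -r t 2>&1};

        if ( $? != 0 ) {
            open my $mail, '|-', '/usr/sbin/sendmail -t'
                or die "can't run sendmail: $!";
            print $mail "To: you\@example.com\n",
                        "Subject: smoke test failures\n\n",
                        $output;
            close $mail;
        }

        # crontab entry, run hourly:
        # 0 * * * * /path/to/smoke_and_mail.pl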

      But if it's just you on the code and it's never going to grow into a monster (say, 10,000 lines or more), then you might not reap the full benefits, or might not think it's worthwhile. On the other hand, you might be pleasantly surprised at how much it can help :)

        Saying "it didn't convince me" in my last post was a deliberate choice of phrasing, because I was already pretty much convinced, even though most of my projects are just me. :) And I absolutely agree that the code in there is a big plus, although personally I wouldn't use the auto-smoke because I run tests manually, usually (much) more than once an hour: if something goes wrong, I find it easier to debug the code immediately after writing it.

        My point was just that the presentation you linked to isn't going to convince anyone that they should use tests because it focuses entirely on how you should test and assumes you already know why you should test, which pretty well negates your earlier comment that "If [this presentation] doesn't convince you of the benefits of a good testing strategy, I don't think anything will".

Re^2: Does anybody write tests first?
by zebedee (Pilgrim) on Feb 22, 2008 at 08:20 UTC
    XP is big on this. Not that I've ever managed it myself ... 8-)

    http://www.extremeprogramming.org/rules.html

    For me, I guess it depends on the project, the timelines, the users, and the requirements. Usually the users don't know what they want, so it becomes a constant iterative process of prototypes until they say "that's it!" - and when you say you want to start again and do it properly, it usually goes quiet ... and the prototype goes into production.
