Re^2: Does anybody write tests first?

by dsheroh (Monsignor)
on Feb 22, 2008 at 07:22 UTC [id://669475]


in reply to Re: Does anybody write tests first?
in thread Does anybody write tests first?

It didn't convince me.

The HTML version of the slides, at least, does a great job of outlining testing strategies, policies, and techniques, but it says absolutely nothing about why to test (unless you count "remember: human testing doesn't scale") or what the benefits of a good testing strategy might be.

Replies are listed 'Best First'.
Re^3: Does anybody write tests first?
by BrowserUk (Patriarch) on Feb 22, 2008 at 08:15 UTC

    Me neither.

    I'm utterly convinced of both the need for, and the benefits of testing during the development process. My argument is entirely aimed at the methodology and tools being advocated and used.

    As for the slides, I got as far as pages 12 and 13 and just stopped reading:

    1. Redundancy Is Good.

      No! Redundancy is redundant.

      I'll put that another way: it costs time, and time is money. Redundant effort is, therefore, wasted money.

    2. There are no stupid tests.

      Rubbish! Of course there are stupid tests.

      Testing that perl can load a module and converting the result into a 0.05% change in a statistic is stupid. When left to its own devices, perl will report a failure to load a module with a concise and accurately informative warning message...and stop. Preventing that stop is pointless: every subsequent test of that module is, by definition, going to fail.

    3. I did go back and scan on, because I felt I ought to. And I could pick out more, but I won't.

    When testing tools or suites are more complex and laborious than the code under test; when they prevent or inhibit you from using simple tools (like print/warn/perl -de1), they become a burden rather than a help.

    In the car industry (in which I grew up), there are many ways of testing (for example) the diameter of a hole in a piece of sheet metal. You might use a laser micrometer. You might use a set of inner calipers and a set of slip gauges.

    The favoured test tool? A simple, tapered shaft with a handle, clearly marked with upper and lower bounds. You simply poke it in the hole: if it passes the lower bound and stops before the upper, the hole is within tolerance. It is simple to operate, extremely robust, and takes no time at all. It's easy to use, so it gets used.

    The Occam's Razor of test tools. I seem to think that Andy has an appreciation for OZ. He should consider applying it in this case also.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

      Testing that perl can load a module and converting the result into a 0.05% change in a statistic is stupid.

      A simple sanity check before deployment might simply check that everything loads correctly. As you know, use does more than just load. It actually executes code, in some cases. If the environment has changed, the config has changed (as it often does when you go to production), just loading a module might break.

      One might have modules that are rarely used, or loaded dynamically via require. If your other testing isn't very good, at least you can check these rare cases with a quick Test::More::use_ok.

      If someone broke a module such that it doesn't load, I want to show that before I get ten minutes into another (manual) test that depends on it.

      Certainly there are ways to misuse a simple "does this load" test, but such tests are not without value.
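      The sort of pre-deployment sanity check being described might look something like the sketch below (the module names are placeholders, not anything from the thread):

          #!/usr/bin/perl
          # Hypothetical pre-deployment sanity check: verify that every
          # module the application depends on compiles and loads in the
          # target environment.
          use strict;
          use warnings;
          use Test::More;

          # Placeholder names; substitute the application's real modules.
          my @modules = qw( My::App::Config My::App::Schema My::App::Mailer );

          # use_ok reports each failure but lets the run continue, so one
          # broken module doesn't hide the state of the others.
          use_ok($_) for @modules;

          done_testing();

      Run under prove, every module that fails to load shows up in one pass, rather than surfacing one at a time during later manual testing.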

        A simple sanity check before deployment might simply check that everything loads correctly. As you know, use does more than just load. It actually executes code, in some cases. If the environment has changed, the config has changed (as it often does when you go to production), just loading a module might break.

        But don't you see that all use_ok does is allow the test script to continue when the use has failed? It doesn't test anything extra. It doesn't tell you any more. It doesn't verify exports or configuration, or tell you what piece of code failed. Indeed, if any warnings or errors are produced that might tell you what failed and why, it hides them from you.

        And, unless you are testing more than one module from that test script (which I assume no one does), there is nothing else useful to do, because if the use failed, none of the other tests are viable. Literally all you've done by using use_ok is allow the running of further tests that cannot possibly succeed. Oh, and allowed the harness to count more imaginary numbers.
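        The "bare" alternative being argued for here can be sketched as follows (module name is a placeholder):

            #!/usr/bin/perl
            # Let perl itself report a load failure and stop: no harness,
            # no synthetic pass/fail count. Equivalent one-liner:
            #   perl -MMy::App::Schema -e1
            use strict;
            use warnings;

            # Placeholder name. On failure, perl dies with its own
            # diagnostic ("Can't locate My/App/Schema.pm in @INC ...")
            # and a non-zero exit status.
            require My::App::Schema;

            print "loaded ok\n";

        The exit status alone makes this usable from a shell script or cron job, which is the simplicity being advocated.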


      Testing that perl can load a module and converting the result into a 0.05% change in a statistic is stupid.

      And yet I've seen it catch real bugs in real code.

        And yet I've seen it catch real bugs in real code.

        Sorry, but you'll have to explain to me how testing that perl can load a module detected a bug, that you would have missed by allowing perl to report:

        Can't locate object method "new" via package "Some::Module" (perhaps you forgot to load "Some::Module"?) at...

        ?


Re^3: Does anybody write tests first?
by cLive ;-) (Prior) on Feb 22, 2008 at 07:44 UTC

    "Human testing doesn't scale"? Maybe I should have linked to another of his slide presentations that has more background info than practical code.

    But for me, from experience, leaving smoke running hourly as a cronjob (one that emails me if there are errors) while I'm developing has saved me a pile of debugging. Quite a few times, I've assumed something, only to find that that assumption broke another assumption elsewhere in the code.

    I know I'm far too lazy to manually test every time I add new functionality or optimizations, so knowing that I'm going to get an email if there's either something wrong with my code or a test that's wrong because functionality has changed is invaluable.

    But, if it's just you on the code and it's never going to grow to become a monster (say, 10,000 lines or more), then you might not reap the full benefits, or think it's worthwhile. On the other hand, you might be pleasantly surprised at how much it can help :)

      Saying "it didn't convince me" in my last post was a deliberate choice of phrasing, because I was already pretty much convinced, even though most of my projects are just me. :) And I absolutely agree that the code in there is a big plus, although personally I wouldn't use the auto-smoke: I run tests manually, usually (much) more than once an hour, because if something goes wrong, I find it easier to debug the code immediately after writing it.

      My point was just that the presentation you linked to isn't going to convince anyone that they should use tests, because it focuses entirely on how you should test and assumes you already know why you should test. That pretty well negates your earlier comment that "If [this presentation] doesn't convince you of the benefits of a good testing strategy, I don't think anything will".
