Qiang has asked for the wisdom of the Perl Monks concerning the following question:

In other words, how do you know that you have tested enough?

New to testing, I just started writing tests for the webapp I wrote, using Test::More and Test::WWW::Mechanize (I wrote the webapp first; otherwise it would have been interesting to try writing the tests before the code). With corner cases and bad inputs, I feel that I have tested the webapp enough so that it works the way it should. Along with the T::W::M testing, I also test module requirements and the configuration file. So far it has been joyful.
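
Just so it's clear what I mean, here is roughly the style of T::W::M test I have been writing (the URL, form fields and messages below are made up for the example, not from my real app):

    use strict;
    use warnings;
    use Test::More tests => 4;
    use Test::WWW::Mechanize;

    my $mech = Test::WWW::Mechanize->new;

    # made-up login page, just to show the shape of the tests
    $mech->get_ok( 'http://localhost/myapp/login', 'login page loads' );
    $mech->content_contains( 'Please log in', 'login form is shown' );

    # one of the bad-input cases: bogus credentials must be rejected
    $mech->submit_form(
        form_number => 1,
        fields      => { username => 'nobody', password => 'wrong' },
    );
    $mech->content_contains( 'Login failed', 'bad credentials are rejected' );
    $mech->content_lacks( 'Welcome', 'no welcome page for bad credentials' );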

However, based on what I have read here and in perldoc, I seem to be missing unit tests, i.e. testing each function and module API. I think Test::Tutorial has an example of this. I also read this:

Bill Venners: When do you stop writing tests? You say in Refactoring, "There's a point of diminishing returns with testing, and there's a danger that by writing too many tests you become discouraged and end up not writing any. You should concentrate on where the risk is." How do you know where the risk is?

Martin Fowler: Ask yourself which bits of the program would you be scared to change? One test I've come up with since the Refactoring book is asking if there is any line of code that you could comment out and the tests wouldn't fail? If so, you are either missing a test or you've got an unnecessary line of code. Similarly, take any Boolean expression. Could you just reverse it? What test would fail? If there's not a test failing, then you've obviously got some more tests to write or some code to remove.
from http://www.artima.com/intv/testdriven4.html
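
The Boolean part made more sense to me once I tried it on a made-up example (is_adult() below is invented purely for illustration):

    use strict;
    use warnings;
    use Test::More tests => 3;

    sub is_adult {
        my ($age) = @_;
        return $age >= 18 ? 1 : 0;
    }

    ok( is_adult(30), '30 is an adult' );

    # With only the test above, reversing ">=" to ">" would still pass:
    # nothing exercises the boundary, so by Fowler's rule there are more
    # tests to write (or code to remove).
    ok( is_adult(18),  '18 is right on the boundary and still counts' );
    ok( !is_adult(17), '17 is not an adult' );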

It reads fine, but I get the feeling that I am going to write hella more tests if I use his method. Now, for the experienced monks: how much unit testing, or just general testing, do you do?

Re: when do you stop writing test?
by blue_cowdawg (Monsignor) on Feb 10, 2007 at 17:03 UTC
        In other words, how do you know that you have tested enough?

    The official answer: it depends.

    Generally in a perfect world you write enough tests that you check out all of the functionality of a system. In this context I'm using the word "system" to refer to whatever it is you are testing.

    Let me take a somewhat simple analogy: I'm buying a car from a car dealer. What am I going to test before I buy the car?

    • I'm going to look at the car and look for obvious physical defects, such as dents, loose mouldings, crooked doors, cracked glazing, etc.
    • Does it start right away, or does the dealer have to jump-start it? If the dealer resorts to jump-starting the car, that can mean either that the car has been sitting on the lot for a long time (why?) or that there are electrical problems with the car.
    • Take it for a road test: does the steering work? How does the car handle? How does it sound? Does it run smoothly?
    • Turn it off once I'm back at the dealer and restart it. Does it start easily?
    These are just a few things I can think of off the top of my head that are the "basic tests" before I buy a car. If I'm really serious about buying the car there are a lot more things I do to determine if the car is worth my money. But enough of this analogy.

    In a software system I look at what the software is supposed to do and put together some tests that check out the base functionality, and then I go deeper with test input data sets and try to deal with the "boundary conditions." Some of the things I'm looking for are as follows:

    • For a given input do I get the expected output?
    • Does error detection/correction work?
    • Something more ethereal: can I break it?
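
    To make the first two concrete, here is a minimal Test::More sketch (My::Widget and frobnicate() are placeholders, not a real module):

        use strict;
        use warnings;
        use Test::More tests => 3;

        use_ok( 'My::Widget' );    # does the thing even load?

        # For a given input do I get the expected output?
        is( My::Widget::frobnicate( 2, 3 ), 5, 'frobnicate adds small integers' );

        # Does error detection work?
        eval { My::Widget::frobnicate( 'not a number', 3 ) };
        like( $@, qr/numeric/, 'non-numeric input dies with a useful error' );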

    When I'm done with my testing I hand the code off to a particular friend of mine who is notorious for finding ever more inventive ways of breaking things, and then I figure out how to "Richard-proof" my code from there.

    Having done all that testing, I still remember the old saying:

        "If you idiot proof something, they'll just release the next revision idiot."


    Peter L. Berghold -- Unix Professional
    Peter -at- Berghold -dot- Net; AOL IM redcowdawg Yahoo IM: blue_cowdawg
Re: when do you stop writing test?
by GrandFather (Saint) on Feb 10, 2007 at 21:58 UTC

    The short answer? When no more bugs are found and you've stopped writing code, or when the code stops being used.

    I find there are two modes for writing tests: proactive and reactive. In reactive testing you write a test when a bug is found (and before it is fixed) that fails against the buggy code and passes once the bug has been fixed.

    Proactive testing checks against coding issues, edge cases in the data and unusual interactions with the user interface. The data tests are generated by code inspection and knowledge of the problem domain. The interaction tests are generated by inspecting the UI and by doing "silly stuff" (idiot proofing tests).

    Proactive tests tend to be written as the code is written (before if you are writing test driven code) and reactive tests are obviously written after the code. Reactive tests may often be written to further test issues found by proactive tests.

    Writing proactive tests tends to finish when you have finished writing the code. Reactive test writing finishes when bugs stop being detected.
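
    In Test::More terms a reactive test is tiny; something like the sketch below (parse_date() and the bug number are invented), written while the bug is still open so that it fails now and passes once the fix is in:

        use strict;
        use warnings;
        use Test::More tests => 1;

        use My::App::Util qw(parse_date);    # hypothetical helper returning a hashref

        # Bug #123: the 31st of a month was rolled over into the next month.
        is( parse_date('2007-01-31')->{day}, 31,
            'day 31 survives parsing (regression test for bug #123)' );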


    DWIM is Perl's answer to Gödel
      Just to add my 2 cents to the above excellent summary.

      Write the proactive tests either before writing the code or in parallel with coding. The proactive tests should test against the requirements **NOT** the implementation. The requirements doc is the bible for the proactive tests.

      Write the reactive tests when a bug/issue/broken functionality is detected and add it to the test suite so that a repeat offence is automatically caught.

      When to stop is an open-ended question. In theory, you stop when there are no more defects in the system. In reality, you stop when no more defects are found (or when the cost of testing outweighs the benefits/number of bugs found).

      Mahesh

      I think, building on GrandFather's thoughts, that you can start by writing tests for all the functional and non-functional requirements, because that is everything you know about your system at the planning stage.

      That's my two cents :-)

      Igor 'izut' Sutton
      your code, your rules.

Re: when do you stop writing test?
by perrin (Chancellor) on Feb 10, 2007 at 17:07 UTC
    Testing the external API with Mechanize is great. You should pat yourself on the back for that. Usually there are some modules that are separate from your web UI code, which are easier to test thoroughly by calling their methods directly. It's more an issue of getting more coverage, or saving some setup time on the Mech tests, than it is of needing a particular type of test. If you're wondering what you haven't tested yet, Devel::Cover can tell you.
      Thanks for reminding me of Devel::Cover. I have read about it here: http://www.perl.com/lpt/a/838

      Another module to bug our sysadmin to install. <sigh>

Re: when do you stop writing test?
by ptum (Priest) on Feb 10, 2007 at 16:35 UTC

    Heh. I've generally decided that I've tested enough when I've run out of time for a project. :)

    Seriously, I think the question of 'enough' testing (particularly with respect to the idea of diminishing returns) has a lot to do with the extent to which your code is mission critical. In my current position, I do a lot of internal web applications for low numbers of internal customers, and the bar is pretty low for some of that stuff. I find it is often easier to fix a problem afterward than spend the time being ultra-rigorous up front. In a previous job, there was a high need for quality and so, in that context, I was a lot more careful about building a complete test suite, several times arriving at that happy point of not being able to conceive of any further way to test my code.

    I know that chromatic has an interest in this area, and you may want to listen carefully to any suggestions from monks who have a similar level of testing experience.

      Interesting. It sorta makes sense, since I can relate a bit. I work at a university, btw. :)

      But most of our job is code maintenance, bug fixes, and adding new features. That's why the manager is excited to learn that I started testing and writing docs about it on our wiki.

      Lots of our code is mission critical, though, as we write and maintain apps that deal with course enrollment, all kinds of student info, etc.

      PerlMonks is the first place I look for testing knowledge. Nodes from chromatic and adrianh have helped me a lot.

Re: when do you stop writing test?
by Util (Priest) on Feb 10, 2007 at 17:28 UTC

    Devel::Cover will show you, via a cool color-coded listing, all the untested paths and boolean combinations in your code.

    I rarely have time to write tests for complete 100% coverage, but I find it very helpful to have the untested sections pointed out so clearly.

      Remember that full coverage means only that your tests have touched all the code -- not that the code is doing everything right. That said, Devel::Cover can be a great way of spotting stuff you haven't tested yet.
Re: when do you stop writing test?
by jplindstrom (Monsignor) on Feb 11, 2007 at 19:05 UTC
    One of the agilistas said it very well on the XP mailing list once.

    It's based on the fact that you really start to appreciate the test suite once it has saved you from breaking the application a couple of times.

    Like when the tests told you that while you tinkered in this part of the code base a totally unrelated and completely separate feature stopped working. Only it wasn't so unrelated after all. And if not for the test no one would have realized there was a new bug in the system until days or weeks later when someone (I bet an end user too) reports that something isn't right.

    So after writing tests over a period of time you realize the test suite is your safety net that will catch you when (not if) you fall. This is doubly true when someone is new to a code base, so by writing tests you ensure that neither you nor other people screw up the code.

    Anyway, what the guy said on the mailing list was something like this:

    You continue writing tests until boredom overcomes fear.

    And I think that's very true.

    /J

Re: when do you stop writing test?
by Dervish (Friar) on Feb 12, 2007 at 02:40 UTC
    On the subject of what kind of tests do you need to write: with any new app, I generally only write a few (perhaps TOO few) tests; just enough to show that the main features of the product work. In the process of getting there, I generally find that I've tested many features that will never break again. Of course, for critical code, we also test to be sure that all of the features needed /can/ work, and that no corner cases cause a crash. But that's about it for the first round of development (at best, this would be pre-alpha quality code).

    It's later, when bugs are discovered or new features are needed, that I tend to do the most testing. I'm very big on regression tests (if something changes, show me a test that compares the new behaviour to the previous) and on coverage tests (if I wrote this block of code, show me a case that exercises it, or tell me why you can't - and consider removing it).

    All of these are fairly basic ways to handle things that come up often in testing where I am. What I'd love to find is a way to write tests that catch third-party code authors doing stupid things for no reason (such as allocating a buffer every time through a loop, in C code, rather than moving the allocation outside the loop), but I haven't found any good way to write those...

Re: when do you stop writing test?
by DrHyde (Prior) on Feb 12, 2007 at 10:19 UTC
    In the real world situation that you've got, I'd write tests to cover everything that you've documented, plus corner cases and places where you know your code might be a bit dodgy, and stop when you think you've covered everything. Then when you find a bug, you write a test to cover it and then change the code to fix it - and also make sure you still pass all your old tests.

    You might find some of the code coverage modules on the CPAN useful - if they find code that's not covered when you run your tests, that's a sure sign that you need more tests.

Re: when do you stop writing test?
by Moron (Curate) on Feb 12, 2007 at 14:21 UTC
    I advocate prevention rather than cure, that is to say, spend sufficient time on code design to limit the complexity of testing and equally to make it a doddle to maintain. In my experience even the most daunting of requirements can and should be reduced to the simplest technical design that does the job.

    Or, to quote C. J. Date's An Introduction to Database Systems: a computer system should reflect the simplest model capable of supporting the data, rather than an effort to model the real world.

    Update: But if you've arrived at the end of that road, willingly or not: the most popular standard for a "complete" test set seems to be a set of functionally unique (though possibly arbitrarily chosen) permutative cases for each requirement specified for the system.

    -M

    Free your mind

      I agree with Moron about the simplicity of design. Poor design can lead to overly complex systems that can lead to obscure bugs. Write solid code. KIS - Keep It Simple.

      Also, many times you will not have the time to do a lot of testing because of deadlines, resource shortages, or marketplace demands, so you need to make your testing count. Test what is important first, then work your way through the rest.

      One thing that we found important is to have a non-programmer do some of the final testing (validation). Most programmers won't try hard enough to break their own code. A good validation engineer will put negative numbers in the wrong field, press control-c 257 times in a row, paste chapter 7 of "War and Peace" into a field, or blast DTMF into the ear of an operator to get them to hang up - or do any number of things that you would never think of. However, do not rely on them for all the testing; you need to deliver a good working product to them. Their job is to find the things that you didn't think of.

      -Eugene

Re: when do you stop writing test?
by skazat (Chaplain) on Feb 13, 2007 at 04:38 UTC

    Boy howdy, I'd love to be in the position on my project of asking "when do I stop writing tests?" :)

    I'm currently on a project that is starting its (roughly) 7th year, and it has only had a test suite since last September.

    The test suite barely covers any of the functionality, but the parts it does cover have become invaluable.

    Since it's sort of a weird position to be in, from here on out, I'm doing this:

    * Write a new (failing) test for any and all bugs submitted. Work on making the tests pass. Keep these tests for as long as they're applicable, and note the bug in the test script itself.

    * For any new code, write tests that cover as much as possible. Tests for new code are easier, since you can write the tests at the same time as you write the code - you *think* a bit more clearly while coding, because you're always wondering a little how it could be tested.

    * For all other code, write tests when time allows. Untested code is a big question mark in the sentence of, "Is this *really* working the way I think it is?"

    Some good times to write tests are when people submit patches that don't change any API but give some sort of performance improvement - does the patch do this without breaking functionality?
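
    For those performance patches, the test I reach for is an equivalence check against a dumb but obviously correct version (fast_sum() and the module name below are made up):

        use strict;
        use warnings;
        use Test::More tests => 4;

        use My::App::Math qw(fast_sum);    # the patched, optimised routine

        sub slow_sum {                     # naive reference implementation
            my $total = 0;
            $total += $_ for @_;
            return $total;
        }

        for my $case ( [], [5], [ 1 .. 100 ], [ -3, 0, 7 ] ) {
            is( fast_sum(@$case), slow_sum(@$case),
                'fast_sum matches the reference for [' . join( ',', @$case ) . ']' );
        }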

    Another good time is if you're writing new documentation. Write a test to verify that your documentation is correct.
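
    One cheap, mechanical piece of that is checking the POD itself; the usual t/pod.t boilerplate below only proves the docs parse cleanly (it doesn't verify that what they say is true - that still takes ordinary assertions), but it's a start:

        use strict;
        use warnings;
        use Test::More;

        # skip rather than fail on machines without Test::Pod installed
        eval "use Test::Pod 1.00";
        plan skip_all => 'Test::Pod 1.00 required for testing POD' if $@;

        all_pod_files_ok();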

    Hope that gives some insight ;)

     

    -justin simoni
    skazat me
