I can't count the modules I've installed from CPAN, and most of them installed just fine with either cpan Some::Module or by downloading them and doing perl Makefile.PL && make && make test && sudo make install

But some didn't. In those cases, I got errors along the lines of

t/025_some_test............ok 1/256
#     Failed test in t/025_some_test.t at line 7235.
#          got: 'FOO'
#     expected: 'BAR'
# Looks like you failed 1 test of 256.
t/025_some_test............dubious
        Test returned status 1 (wstat 256, 0x100)
DIED. FAILED test 123

OK, fine. Invoke $EDITOR and look at line 7235. Maybe you can figure out what went wrong. Maybe you can't, because the module's author has been a bit too terse for your level of knowledge. You could get hints from the SYNOPSIS section of the module's pod, but sometimes you just won't get them without grokking the entire module.

Now, since test files are valid perl files, the author could just have placed some pod into their test files, alongside the tests. Wouldn't that be handy? You could then just say perldoc t/025_some_test.t and get a description of the tests being performed, see which test does what, and read an explanation of each test and maybe its failure conditions. Since writing test files requires some thinking, those thoughts could simply be documented as pod sections in the test files while composing them. The script h2xs could write stubs for that into the initial test file.
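Something along these lines is what I have in mind - a sketch only, with Some::Module, its frobnicate() method and the 'BAR' result all invented for illustration:

# t/025_some_test.t
use strict;
use warnings;
use Test::More tests => 2;

=head1 NAME

t/025_some_test.t - exercise Some::Module's frobnicate()

=head1 DESCRIPTION

Loads Some::Module, builds an object with the default settings and
checks that frobnicate() returns the canonical 'BAR' string. A failure
here usually means the defaults in Some::Module::new() have changed.

=cut

use_ok('Some::Module');

my $obj = Some::Module->new;
is( $obj->frobnicate, 'BAR', 'frobnicate() returns BAR on a default object' );

Running perldoc on that file would show the NAME and DESCRIPTION sections; running it under prove would run the two tests.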

I would advocate documenting test scripts as a First Best Practice for module authors. What do you think about it?

--shmem

update: Test files (could) give much more insight into a module's usage than can ever be placed into a module's pod without it being overburdened. That's another reason.

_($_=" "x(1<<5)."?\n".q·/)Oo.  G°\        /
                              /\_¯/(q    /
----------------------------  \__(m.====·.(_("always off the crowd"))."·
");sub _{s./.($e="'Itrs `mnsgdq Gdbj O`qkdq")=~y/"-y/#-z/;$e.e && print}

Re: document your test files
by davidrw (Prior) on Jul 30, 2006 at 03:28 UTC
    I think it would be sufficient (especially in your example FOO != BAR case) to use Test::More's diag() functionality. From the POD:
    diag

        diag(@diagnostic_message);

    Prints a diagnostic message which is guaranteed not to interfere
    with test output. Like "print" @diagnostic_message is simply
    concatenated together.

    Handy for this sort of thing:

        ok( grep(/foo/, @users), "There's a foo user" ) or
            diag("Since there's no foo, check that /etc/bar is set up right");

    which would produce:

        not ok 42 - There's a foo user
        #   Failed test 'There's a foo user'
        #   in foo.t at line 52.
        # Since there's no foo, check that /etc/bar is set up right.
    That way the needed "oh, this failed? here's extra info..." can be immediately apparent, and you don't have to perldoc the test file or dig for comments in the test file's source.
Re: document your test files
by Aristotle (Chancellor) on Jul 29, 2006 at 22:40 UTC

    Having some POD to give a summary of the things the test file is supposed to verify is a good suggestion. (As it is, in fact, a good idea to have such a summary in any non-trivial Perl source file.)

    I’d be worried to find extensive documentation for individual test cases, though. Writing individual tests so complex as to require documentation strikes me as unwise. They should be too trivial to usefully document. Instead, you should strive to give your tests good names. (I suppose you could make test names also show up in POD by keeping them there using the <<'=cut' trick.)

    If you’re using a framework like Test::Class to group tests, such groups would be more useful to document traditionally.
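    Roughly like this (a sketch; Some::Module and the test class are invented names):

    package Some::Module::Test;
    use base 'Test::Class';
    use Test::More;
    use Some::Module;

    # One method per group of related assertions; the method name documents
    # the group, and a short comment here can say why the group exists.
    sub constructor : Test(2) {
        my $obj = Some::Module->new;
        isa_ok( $obj, 'Some::Module' );
        ok( $obj->is_empty, 'a fresh object starts out empty' );
    }

    # Run the groups with: perl -MSome::Module::Test -e 'Test::Class->runtests'
    1;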

    Makeshifts last the longest.

      I’d be worried to find extensive documentation for individual test cases, though. Writing individual tests so complex as to require documentation strikes me as unwise. They should be too trivial to usefully document.

      That implies there are millions of them; the intrinsic complexity of the code won't go down. You'll either: (a) have complex tests, (b) have millions of simple tests, hopefully with no gaps in them, (c) have to write code to autogenerate tests, or (d) leave important chunks of code untested.

      I don't think there's any silver bullet there. I do know that in most cases, self-documenting code isn't.

Re: document your test files
by creamygoodness (Curate) on Jul 30, 2006 at 01:59 UTC

    I disagree, though not strongly. Each test has a message associated with it, and IMO best practice is for the author to strive to make those messages suffice as documentation, falling back to comments only when necessary.

    --
    Marvin Humphrey
    Rectangular Research ― http://www.rectangular.com
      The text tells you what is being tested, not how it is being tested. I quite often have tests which rely on the results of previous tests, or which perform convoluted backflips to massage my objects' state into something needed for the next test. This sort of thing, I have realised, must be commented. Not because it's useful to you, as you should be relying on the module's documentation and not its tests to see what it does and how to use it. No, the tests should be commented for the same reason that the module being tested should have comments - because it makes it easier for me to fix bugs or add functionality later.

      Expect me to finish fixing all my test code ... never.

      I disagree, though not strongly. Each test has a message associated with it ...

      So you're not disagreeing, you're agreeing :-)

      I suspect that the OP is referring to the style of tests which are common in distributions that predate Test::Simple and Test::More. Back in the 'old days', a test assertion looked like this:

      ok(1)

      I was working with just such a test file this week and I would dearly have loved to see a message associated with each test.
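      Even something this small (a made-up sketch, not from the file in question) would have helped:

      use Test::More tests => 2;

      my $result = 6 * 7;
      ok( $result == 42 );                        # old style: just "ok 1" / "not ok 1"
      is( $result, 42, 'six times seven is 42' ); # named: the name turns up in the failure report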

Re: document your test files
by adrianh (Chancellor) on Jul 30, 2006 at 11:12 UTC
    Now, since test files are valid perl files, the author could just have placed some pod into their test files, alongside the tests. Wouldn't that be handy? You could then just say perldoc t/025_some_test.t and get a description of the tests being performed, see which test does what, and read an explanation of each test and maybe its failure conditions.

    I'd tend to disagree myself. Slightly.

    I think the time spent writing the POD would be better spent adding appropriate diagnostic messages to the test assertions, or factoring out common code in the test suite into sensible stand alone test assertions.
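    For instance - a sketch only, with the module and method names invented - a factored-out assertion that still reports failures at the caller's line:

    use Test::More tests => 1;
    use Data::Dumper;
    use Some::Module;    # invented module, stands in for whatever is under test

    # A stand-alone assertion: it checks one thing and carries its own
    # diagnostics, and the Level bump keeps failures reported at the caller.
    sub frobnicates_ok {
        my ( $obj, $name ) = @_;
        local $Test::Builder::Level = $Test::Builder::Level + 1;
        ok( $obj->frobnicate, $name )
            or diag( 'object state was: ' . Dumper($obj) );
    }

    frobnicates_ok( Some::Module->new, 'a freshly built object frobnicates' );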

Re: document your test files
by gellyfish (Monsignor) on Jul 30, 2006 at 08:08 UTC

    I don't actually think that this would be useful or particularly desirable. The output of the tests should be useful to the author to diagnose the cause of a test failure on someone else's system (after all, they wouldn't have released with failing tests, would they?) The output also most certainly should give some clear indication of where a failure is caused by some problem external to the module code itself (such as an inability to access a database or network service) and which the person installing the module could be expected to fix. Likewise, skipped tests should output a clear reason why they were skipped, and TODO tests probably should be commented for the benefit of someone implementing the missing functionality (and possibly to help explain "unexpectedly passed" tests.) I might even advocate the heavy commenting of test fixtures where particularly hairy or non-portable code is being used to test some feature. Though of course code in tests should be as simple as possible in order to avoid bugs in the tests themselves - I would say that a significant proportion of test failures are due to faulty assumptions in the tests themselves.
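    To illustrate (a sketch; the SMTP check and the unimplemented frobnicate() are invented), a skip with a clear reason and a commented TODO:

    use Test::More tests => 2;
    use IO::Socket::INET;

    SKIP: {
        # External dependency: nothing the module author can fix from here.
        my $smtp = IO::Socket::INET->new( PeerAddr => 'localhost:25', Timeout => 2 );
        skip 'no SMTP server on localhost:25 - cannot test mail delivery', 1
            unless $smtp;
        ok( $smtp->connected, 'can talk to the local SMTP server' );
    }

    TODO: {
        # Documents the missing feature for whoever implements it, and explains
        # any "unexpectedly succeeded" report that shows up later.
        local $TODO = 'frobnicate() is planned for the next release';
        ok( eval { Some::Module->frobnicate }, 'frobnicate() works' );
    }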

    Given all of the above, I don't see the point in making formal user-level documentation for the tests, and by my standards I would consider it Bad Practice to be taking a test suite as some kind of additional documentation or example of a module's usage: it is likely that undocumented or purely internal functionality will be tested in order to get better granularity, and it is also likely that, in a certain class of application, the tests will need to do things that are unlikely to be necessary in real-life usage.

    If the documentation and/or examples of a module are weak then this is what should be fixed. Providing user level documentation for the tests simply adds an otherwise unnecessary burden on the author.

    /J\

Re: document your test files
by eyepopslikeamosquito (Archbishop) on Jul 30, 2006 at 01:18 UTC

    Agreed. The test suite should be a first class part of a module's documentation. Being executable ensures that it is always kept up to date.

    Having a nicely commented test for each example given in the module's documentation ensures the examples actually work and allows the test suite to be usefully browsed as tutorial material.
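    A sketch of what that can look like (the module, its options and the 'BAR' result are invented):

    # t/synopsis.t - the SYNOPSIS example from the POD, verbatim, wrapped in
    # assertions so the documented example can never silently rot.
    use strict;
    use warnings;
    use Test::More tests => 2;
    use Some::Module;

    # From the SYNOPSIS:
    my $obj = Some::Module->new( verbose => 0 );
    my $out = $obj->frobnicate('FOO');

    ok( defined $out, 'the SYNOPSIS example runs and returns something' );
    is( $out, 'BAR',  'the SYNOPSIS example produces the documented result' );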

Re: document your test files
by n00dles (Novice) on Jul 30, 2006 at 14:40 UTC
    I think this is a good idea. I'm mainly thinking about noobs like myself: I have tried to install modules which had fatal test errors, which left me totally in the dark.

    It would be not only a handy aid for debugging but a learning tool as well... But at the end of the day, how you go about this really should be up to the coder; if they find it a real chore, they're not going to do it.

Re: document your test files
by rvosa (Curate) on Aug 02, 2006 at 20:30 UTC
    I think this is a good idea. Most users will want many examples of how to use your module, and the SYNOPSIS is usually too terse (or worse, outdated). The test suite can then be seen as alternative documentation.