in reply to Using Test::More to make sense of documentation

Test::More is typically used to compare two thingies, and report whether they are equal or not. And that's what you seem to be doing in your post.

IMO, that's a hard way of learning things. I'd rather see what is calculated, not just whether what was calculated actually matches what I think it might calculate (especially when you are learning by trying: by the time you can make a reasonable prediction of what the output will be, you're almost done learning).

For regexes, for instance, inspecting $& is far more informative than guessing what $1 will be. And, for failures, I can learn far more from the output of use re "debug"; than I can from Test::More saying not ok.
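
For instance (a minimal sketch; the string and pattern here are just illustrative, not from anyone's post):

    my $str = "abcabc";
    if ($str =~ /(ab)c\1/) {
        print "whole match: $&\n";   # abcab -- shows exactly what matched
        print "capture 1:   $1\n";   # ab
    }

    # When a match fails, let the engine narrate what it tried:
    use re "debug";
    "abcax" =~ /(ab)c\1/;            # dumps the compiled program and the
                                     # matching attempt, step by step,
                                     # to STDERR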

Re^2: Using Test::More to make sense of documentation
by ELISHEVA (Prior) on May 01, 2009 at 13:54 UTC

    The goals of experimentation are different from the goals of testing. In test mode, you propose an output and verify that the actual output is "as expected". This, as you note, takes a lot of understanding of the module.

    However, in experimental mode, you start with a goal and consider various possible inputs and incantations that might produce the desired output. It is exploratory, not predictive. And for failures, Test::More::is prints a lot more than just "not ok": it prints both the value it actually got and the value you expected, so it can be a good way to "see what happens".
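
    For example (a minimal sketch; the test and the file name are invented for illustration), a failing is() like

        use Test::More tests => 1;
        is( "abcabc" =~ /(ab)c(.)/ ? $2 : undef, "ab", "second capture" );

    produces diagnostics roughly like:

        not ok 1 - second capture
        #   Failed test 'second capture'
        #   at experiment.t line 2.
        #          got: 'a'
        #     expected: 'ab'

    so the failure itself tells you what the incantation actually produced.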

    An alternative see-what-happens technique is the command line or a REPL (read-eval-print loop). Both are very good tools for small clarifications, and REPLs don't run into quoting problems the way the command line does. However, both have several other limitations: repeated setup, lack of annotations, inability to repeat what you did last week, inability to rerun a batch of trials en masse with different inputs, and so on.
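
    As a concrete illustration of that trade-off (assuming a POSIX shell), a one-liner like

        perl -E 'say "abcabc" =~ /(ab)c\1/ ? $& : "no match"'

    is great for a one-off check, but the moment your test string needs a single quote you are fighting the shell rather than learning Perl; a REPL such as Devel::REPL's re.pl avoids the quoting but shares the other limitations above.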

    $& - agreed. But that wasn't the point of the example, and I apologize if it wasn't clear. The point was really to show:

    • how to test an incantation where the goal isn't simply the output of a subroutine (via eval)
    • an experimental approach. The intent was to give a sampling of the kinds of mistakes one might make while feeling one's way to understanding how capturing and back-references work.
    • being able to keep an annotated history of what does and doesn't work. A command line or REPL provides some short-term history, but no annotations you can go back to weeks later.
    • the ability to repeat experiments en masse for new input by encapsulating a set of tests in a subroutine (see the sketch just below).
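
    As a minimal sketch of the last two points (the helper name, pattern, and inputs are mine, not from the original article):

        use strict;
        use warnings;
        use Test::More;

        # One annotated experiment batch, parameterized by input, so a
        # new trial is just one more call.
        sub try_backrefs {
            my ($str) = @_;
            # eval so anything that dies mid-incantation becomes a
            # recorded result rather than an aborted run
            my $got = eval { $str =~ /(ab)c\1/ ? $& : undef };
            diag("died: $@") if $@;
            is( $got, "abcab", "backref match on '$str'" );
        }

        try_backrefs("abcabc");   # passes
        try_backrefs("abcax");    # fails -- and is() shows got vs. expected
        done_testing();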

    Best, beth

      I've sometimes used the inverse of this (reading the tests to understand the intended usage), but had never thought of it as an exploratory technique. Further consideration suggests that these experimental test files might be useful to the author, in at least two cases:

      1. when reporting a code bug or patch.
      2. when reporting a documentation bug or patch.

      The first of these is recommended practice, but the second might also be useful. When receiving a documentation patch or question, it's sometimes hard to tell why the person asking the question doesn't understand my <sarcasm>perfectly clear and understandable</sarcasm> documentation. I can see the test file containing your experiments being very useful in making those assumptions and misconceptions clearer.
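
      For instance (a purely hypothetical sketch; Some::Module and frob() are placeholders, not a real API), the attached experiment file might boil down to:

          use strict;
          use warnings;
          use Test::More tests => 1;
          use Some::Module;   # placeholder for the module under discussion

          # What the documentation led me to expect, versus what I got:
          is( Some::Module::frob("input"), "expected-per-docs",
              "frob() behaves as the SYNOPSIS describes" );

      which shows the maintainer exactly which reading of the docs the reporter walked away with.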

      While some might argue that this isn't the best way to learn an interface, I think you have added another tool to my learning toolbox.

      G. Wade