in reply to Re^3: How can I write Test::Finished? (auto count)
in thread How can I write Test::Finished?

I'm not sure why replacing that with "make plan; make test" (or perhaps just "make plan test") is such a hardship for you.

It's a hardship because I'll forget to do it. My fingers know how to type 'make test' all by themselves; they don't even ask my spinal column for guidance.

But it sounds like you don't care at all about the types of failures that plans are meant to catch

That's almost true. I certainly don't care enough to maintain a magic number at the top of all my test files. But I do care enough to bang out a module that I can add once and forget about. It sounds like some combination of a source filter and fork() will do the trick, so I think I'll give that a try.
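For what it's worth, the fork() half of that idea can be sketched in a few lines. This is only an illustration, not an actual module: the test body and the dry-run trick below are invented. The idea is to run the tests once in a forked child, count the test lines it emits, print that count as the plan, and then run the tests for real.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Stand-in for the real test body (invented for illustration).
sub run_tests {
    print "ok 1 - first\n";
    print "ok 2 - second\n";
    print "not ok 3 - third\n";
}

# Fork a child whose STDOUT is piped back to us.
my $pid = open(my $child_out, '-|');
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {        # child: dry-run the tests and exit
    run_tests();
    exit 0;
}

# parent: count the child's test lines to derive the plan
my $count = 0;
while (my $line = <$child_out>) {
    $count++ if $line =~ /^(not )?ok\b/;
}
waitpid($pid, 0);

print "1..$count\n";    # the auto-generated plan
run_tests();            # now run the tests "for real"
```

A real implementation would still need the source-filter half, so that the module could wrap the rest of the test file rather than an explicit subroutine, and it would have to cope with tests that have side effects when run twice.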

-sam


Re^5: How can I write Test::Finished? (auto count)
by tye (Sage) on Jun 22, 2004 at 06:30 UTC
    But I do care enough to bang out a module that I can add once and forget about. It sounds like some combination of a source filter and fork() will do the trick

    Note that that won't catch all of the problems that test plans catch. I think it catches the least likely of those problems [some code causing an early exit(0)].

    - tye        

      It will catch any problem that causes the test script to not run to completion. If you're thinking about problems with shortened loops, I generally already test for them with something like:

         is($loop_count, 10, "Loop ran 10 times");

      That's a case of manual counting that makes sense to me, and it's close enough to the code it references to be easy to maintain. Plus, when it fails I know exactly which loop ran short. When you get a failure from a global count, you don't have a clue where in the test file to look.

      But I'm just guessing here. What problems are you suggesting?

      -sam

        It will catch any problem that causes the test script to not run to completion.

        Most of those are already caught. So the protection it adds is against problems that cause the test script to not run to completion but also fail to report a(n uncommented) failure and still exit with a 0 status (I presume -- I haven't personally checked recently that Perl tests notice such problems).

        The types of errors that I think are more common than those are programming errors in the test script that cause tests to be run too few or too many times. For example, something that causes a test to be skipped or that causes a problematic test that isn't supposed to be run with this release to be run anyway.

        When I think of loops I've written in test scripts, the cases I come up with usually have the number of iterations determined by some data in the module, and there is usually more than one loop over the same items. So I don't find that testing a count near each of the loops puts the count near the data that determines it. But I'm glad that your technique works for you.

        Stepping further back, the "need" for the plan count is a symptom that the test output is too regular and boring. I'd rather have the test script output much more interesting values that are compared against the "known good" output that is included with the test script (in a separate file). All of the work to write modules to auto increment test numbers and to associate test names with test numbers, etc. is to overcome this flaw in the design of traditional Perl test scripts, IMO.

        I'd rather define a simple output format similar to here-docs where a line of output declares a test's name and includes one of 1) a notice that the test was skipped, 2) the simple results of the test (such as a count or fairly short string), or 3) the terminator that marks the end of the output for this specific test (when on a line by itself). Then the harness could still report how many tests failed or were skipped, but the person investigating the test failure would be left with a text file containing the expected output and another containing the actual output, which I think would usually be a ton more helpful than just getting a list of integers.
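        To make that concrete, here is one possible shape for such a format and a toy harness that parses it. The `===`/here-doc syntax and all names below are invented for illustration; this is a sketch of the idea, not a proposed standard.

```perl
use strict;
use warnings;

# Parse named-block test output into a name => result hash.
# Recognizes three forms (syntax invented for illustration):
#   === name: SKIPPED ...          a skipped test
#   === name: short result         a simple one-line result
#   === name <<TERM ... TERM       a multi-line result, here-doc style
sub parse_results {
    my ($text) = @_;
    my %results;
    my @lines = split /\n/, $text;
    while (@lines) {
        my $line = shift @lines;
        if ($line =~ /^=== (\S+): SKIPPED\b/) {
            $results{$1} = 'SKIPPED';
        }
        elsif ($line =~ /^=== (\S+) <<(\w+)$/) {
            my ($name, $term) = ($1, $2);
            my @body;
            push @body, shift @lines while @lines and $lines[0] ne $term;
            shift @lines;    # drop the terminator
            $results{$name} = join "\n", @body;
        }
        elsif ($line =~ /^=== (\S+): (.*)$/) {
            $results{$1} = $2;
        }
    }
    return \%results;
}

my $expected = <<'EOT';
=== word_count: 42
=== dump_tree <<END
root
  left
  right
END
EOT

my $actual = <<'EOT';
=== word_count: 41
=== dump_tree <<END
root
  left
  right
END
EOT

# Compare actual output against the "known good" file, per test name.
my $exp = parse_results($expected);
my $act = parse_results($actual);
for my $name (sort keys %$exp) {
    if (($act->{$name} // '') eq $exp->{$name}) {
        print "ok - $name\n";
    }
    else {
        print "not ok - $name (got '$act->{$name}', expected '$exp->{$name}')\n";
    }
}
```

        The harness can still count failures and skips, but on a failure the investigator has two text files to diff rather than a bare list of test numbers.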

        Then the hokey "plan" becomes pretty useless. (:

        - tye