"File and line number makes no sense in the context of a failed test."
Au contraire. I'd at least have a starting point, even in this somewhat contrived example.
In more normal cases, for about 95% of the test scripts I've looked at, the ones that consist of long linear lists of unnumbered ok()s and nok()s, having the line number of the failing test would save me from having to play that most ridiculous of games: counting the tests. Are they numbered from zero or one? Does a TODO count or not? Do tests that exist inside runtime conditional if blocks count if the runtime condition fails? If not, how can I know whether that runtime condition was true or false? Etc.
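To make the counting game concrete, here is a minimal sketch of the kind of script being described; the subs and the EXTRA_TESTS switch are stand-ins invented for illustration:

    use strict;
    use warnings;
    use Test::More;

    sub foo { 1 }
    sub bar { 1 }

    ok( foo(), 'foo works' );      # test 1

    if ( $ENV{EXTRA_TESTS} ) {     # runtime conditional block:
        ok( 1, 'extra check' );    # does this one count or not?
    }

    ok( bar(), 'bar works' );      # test 2... or test 3?
    ok( 0, 'deliberate failure' ); # "test N failed" tells you little;
                                   # a line number points straight here
    done_testing();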
Of course, in this case I'd need other information too. But then, in this case, the test number would be of no direct benefit either: I'd have to modify the .t file to print out a sorted list of the keys to %tests at runtime, as there would be no other way to work out which test corresponded to test N.
Oh damn! But then tracing stuff out from within a test script is a no-no, because the test tools usurp STDOUT and STDERR for their own purposes, taking away the single most useful, and most used, debugging facility known to programmer-kind: print.
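A minimal illustration of that clash: under a TAP harness, STDOUT carries the test protocol itself, so anything a plain print emits gets interleaved with the protocol lines rather than arriving as clean trace output.

    use strict;
    use warnings;
    use Test::More tests => 1;

    my $got = 2 + 2;
    print "got: $got\n";   # lands on STDOUT, mixed into the TAP
                           # stream the harness is reading
    ok( $got == 4, 'arithmetic still works' );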
And there you have it, today's number one reason I do not use these artificial, overengineered, maniacally OO test tools. They make debugging and tracing the test script 10 times harder than doing so for the scripts they are meant to test.
They are an unbelievably blunt instrument, whose basic purpose is to display and count the numbers of boolean yeses and nos. To do this simple function, they pile layer upon layer of machinery on top of it.
And all of this so as to produce a bunch of 'pretty pictures and statistics' that I have no use for, and that I have to push through yet another layer (the test harness) to sift and filter in order to produce the only statistic I am interested in.
For all the world this reminds me of those food ads and packaging that proclaim to the world: "Product X is 95% fat free!". Ugh. You mean that 5% of that crap is fat?
To date, the best testing tool I've seen available is Smart::Comments. Its require:, assert:, ensure:, insist:, check:, confirm:, and verify: assertions are amazingly simple, amazingly powerful.
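A minimal sketch of those assertions in use (the variable and values are invented for illustration); each ### line is an ordinary Perl comment until Smart::Comments is loaded:

    use strict;
    use warnings;
    use Smart::Comments;    # activates the ### assertions below

    my $count = 3;

    ### require: $count > 0
    ### check:   $count < 10

    print "both assertions held\n";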
A failed assertion dies on the spot, reporting the expression that failed, the values of the variables it contains, and where it came from. This is immediate and accurate.
I do not have to modify the user-supplied testcase in any way. And that is the holy grail of testing: run the user's script, unmodified, on my system, with debugging enabled within my modules only.
And if the user's testcase has a bunch of complex dependencies that I do not or cannot have, I can instruct the user to go into his copy of my modules and delete 1 character, and all of my tests are enabled. He can then run his testcase in his environment and supply the output to me, and I can see exactly what went on.
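One plausible arrangement behind the "delete 1 character" trick (My::Module and frobnicate are hypothetical names invented for illustration): ship the module with the Smart::Comments line commented out, so the ### assertions are inert comments until the user removes the single leading #.

    package My::Module;
    use strict;
    use warnings;

    #use Smart::Comments;   # delete the leading '#' to enable
                            # every ### assertion in this file

    sub frobnicate {
        my ($input) = @_;
        ### require: defined $input
        ### check: $input =~ /\A\d+\z/
        return $input * 2;
    }

    1;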
Smart::Comments is the single most useful, and most underrated, module that theDamian (and possibly anyone) has yet posted to CPAN. I recognised the usefulness of the concept long ago when I came across Devel::StealthDebug, which may or may not have been the inspiration for Smart::Comments. In use, the former proved to be somewhat flaky, but theDamian has worked his usual magic with the concept and come up with a real, and as yet unrecognised, winner.
To achieve the perfect test harness, all that would be needed is a mode in which the assertions report on success as well as on failure. The information, as logged for failure, would also be logged for success in this mode of operation.
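Smart::Comments itself stays silent on success, so here is a minimal sketch of that mode as a plain Perl helper (check_and_log is a hypothetical name invented for illustration); it logs the same detail for a pass as for a failure:

    use strict;
    use warnings;

    # Hypothetical helper: evaluate a condition and log the same
    # detail (outcome, expression, file, line) either way.
    sub check_and_log {
        my ($expr, $cond) = @_;
        my (undef, $file, $line) = caller;
        my $ok = $cond->();
        warn sprintf "[%s] %s at %s line %d\n",
            ( $ok ? 'PASS' : 'FAIL' ), $expr, $file, $line;
        return $ok;
    }

    my $count = 3;
    check_and_log( '$count > 0',  sub { $count > 0  } );
    check_and_log( '$count < 10', sub { $count < 10 } );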
Why haven't I written it yet? Because I keep hoping that Perl6, oops, Perl 6 is 'just around the corner', and I'm hoping that Smart::Comments will be built-in.
Of course, a few additional modules wouldn't go amiss: Smart::Comments::DeepCompare, Smart::Comments::LieToTheCaller, and a few others. But mostly it's all right there.
In reply to Re^5: Need advice on test output by BrowserUk
in thread Need advice on test output by Ovid