Failed Test  Stat Wstat Total Fail  List of Failed
----------------------------------------------------------------------
t/bar.t         4  1024    13    4  2 6-8
t/foo.t         1   256    10    1  5
(1 subtest UNEXPECTEDLY SUCCEEDED).
Failed 2/3 test scripts. 5/33 subtests failed.
Files=3, Tests=33,  0 wallclock secs ( 0.10 cusr +  0.01 csys =  0.11 CPU)
Failed 2/3 test programs. 5/33 subtests failed.
If a test is designed to fail, then does it get reported as a failure when it does fail? Or is that an 'EXPECTED FAILURE'?
If it's not important enough to tell me which one, why is it important enough to bother mentioning it at all?
And if they are the same thing, why is it necessary to give me the same information twice?
Actually, 3 times. "Files=3, Tests=33, " is just a subset of the same information above and below it.
Is there any other use for that timing information?
Of course, you'll be taking my thoughts on this with a very large pinch of salt as I do not use these tools. The above are some of the minor reasons why not.
Much more important is that there are exactly two behaviours I need from a test harness.
"Nothing failed" or "All tests passed".
I have no problem with a one-line, in-place progress indicator ("\r..."), but it should not fill my screen buffer with redundant "that's ok and that's ok and that's ok" messages. I use my screen buffer to remember things I've just done: the results of compile attempts, greps, etc.
Verbose output that tells me nothing useful, whilst pushing useful information off the top of my buffer, is really annoying. Yes, I could redirect it to null, but then I wouldn't see the useful stuff when something fails.
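(The behaviour described above is easy to sketch. The following is a hypothetical illustration, not any existing harness: a single "\r" progress line that overwrites itself, with detail printed only for failures. All names here are made up.)

```python
import sys

def run_quietly(tests):
    """Run (name, callable) pairs with an in-place progress line;
    print detail only for failures."""
    failures = []
    for i, (name, test) in enumerate(tests, 1):
        # Overwrite the same line rather than scrolling the buffer.
        sys.stdout.write(f"\rrunning {i}/{len(tests)}: {name:<20}")
        sys.stdout.flush()
        try:
            test()
        except AssertionError as exc:
            failures.append((name, exc))
    sys.stdout.write("\r" + " " * 40 + "\r")
    if failures:
        for name, exc in failures:
            print(f"FAILED {name}: {exc}")
    else:
        print("All tests passed.")
    return failures

def t_good():
    assert 1 + 1 == 2

def t_bad():
    assert 1 + 1 == 3, "1 + 1 != 3"

fails = run_quietly([("t_good", t_good), ("t_bad", t_bad)])
```

The point is that the only lines left in the scrollback afterwards are the one-line verdict and any failure details.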
Converting 5/10 into a running percentage serves no purpose. A running percentage is only useful if it will allow me to predict how much longer the process will take. As the test harness doesn't know how many tests it will encounter up front, much less how long they will take, a percentage is just a meaningless number.
If I really want this summary information, or other verbose information, (say because the tests are being run overnight by a scheduler and I'd like to see the summary information in the morning), I have no problem adding a command line switch (say -V or -V n) to obtain that information when I need it.
Preferably, it should tell me which source file/line number (not test file) I need to look at, but the entire architecture of the test tools just does not allow this, which is why I will continue to embed my tests in the file under test.
In reply to Re^3: Need advice on test output
by BrowserUk
in thread Need advice on test output
by Ovid