But don't you see the irony here? Because the test tools don't capture the line numbers, I have to add comments just so I can get back to the line numbers.
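To make that concrete, here's a minimal Test::More sketch of the workaround (frobnicate and the test names are invented for illustration):

    use strict;
    use warnings;
    use Test::More tests => 2;

    sub frobnicate { 2 * shift }    # stand-in for the code under test

    # Every assertion gets a hand-written name and a trailing comment,
    # purely so a failure report can be traced back to this spot.
    ok( frobnicate(2)  ==  4, 'frobnicate doubles small ints' );  # near top of t/bar.t
    ok( frobnicate(-1) == -2, 'frobnicate handles negatives'  );  # keep in sync by hand!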
Not only does adding those comments create extra work (thinking up appropriate comments, typing them, and so on), it also creates a bunch of knock-on problems. For example:
It means you have to re-run the individual failing test scripts (using that syntax I can never recall) in order to get the full output, which in the process pushes my useful information off the top of my buffer.
If the line numbers were available, they could be added to the summary list without any great problem:
    t/bar.t 4 1024 13 4 2 6-8
could become:
    t/bar.t 13 4 2(27) 6(54)-8(77)
You couldn't easily do the same with the comments.
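Mind you, parsing the proposed format is trivial. A rough sketch, assuming the hypothetical test(line) notation above and reading the columns as script, total tests, failure count, then the failed-test list:

    use strict;
    use warnings;

    # Parse a summary line like: t/bar.t 13 4 2(27) 6(54)-8(77)
    # into test-number => source-line pairs.
    my $summary = 't/bar.t 13 4 2(27) 6(54)-8(77)';
    my ( $script, $total, $failed, @fields ) = split ' ', $summary;

    my %line_of;
    for my $field (@fields) {
        if ( $field =~ /^(\d+)\((\d+)\)-(\d+)\((\d+)\)$/ ) {
            # A range like 6(54)-8(77): we get the endpoints' lines;
            # tests in between would need entries of their own.
            @line_of{ $1, $3 } = ( $2, $4 );
        }
        elsif ( $field =~ /^(\d+)\((\d+)\)$/ ) {
            $line_of{$1} = $2;
        }
    }

    printf "%s: test %d failed at line %d\n", $script, $_, $line_of{$_}
        for sort { $a <=> $b } keys %line_of;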
If the line numbers were available in the test harness summary data as above, then I could see myself writing a short editor macro (in my fairly macro-challenged preferred editor) to run the test harness, capture and parse the summary output, and use it to step through the failing tests.
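Something along these lines, say; still assuming the hypothetical notation, with prove standing in for the harness runner, and emitting the grep-style file:line lines that most editors can already step through:

    use strict;
    use warnings;

    # Run the harness, scrape the (hypothetical) line-number-bearing
    # summary lines, and print "file:line: message" for the editor.
    open my $harness, '-|', 'prove t/ 2>&1'
        or die "Cannot run prove: $!";

    while ( my $row = <$harness> ) {
        my ($script) = $row =~ m{^(\S+\.t)\b} or next;
        while ( $row =~ /(\d+)\((\d+)\)/g ) {
            print "$script:$2: test $1 failed\n";
        }
    }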
Doing something similar with the current setup would involve: running the test harness; capturing and parsing the summary screen; re-running each failing test script individually; capturing and parsing its output; extracting the failing test case comments (if the author has provided them!); loading the test script; searching through it for the comment; and hoping that the comment is unique.
I'm not saying this isn't doable in something like Emacs, but it's so much extra work, and it isn't guaranteed to succeed. Line numbers in a file are unique by definition; comments might be, or they might not.
And remember, either way, all of this only gets me back to the place in the test script where the test failed. I've still got to get from there back to the code it tests, and that could be literally anywhere. If the test is in the same file and in rough proximity to the code being tested, and the failing test output incorporates the filename and line number, then my simple editor macro can take me straight there in one jump.
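The raw material for that jump is already half there: Test::More's per-test failure diagnostics name the script and line (though the exact wording has varied between versions), so a loose filter over prove -v output would do as a sketch:

    use strict;
    use warnings;

    # Feed this verbose harness output, e.g.  prove -v t/ | perl filter.pl
    # and it prints file:line pairs for the editor macro to jump through.
    # Two patterns because the diagnostic phrasing differs by version.
    while ( my $diag = <> ) {
        if ( $diag =~ /\( (\S+\.t) \s+ at \s+ line \s+ (\d+) \)/x
          or $diag =~ /\bat \s+ (\S+\.t) \s+ line \s+ (\d+)/x ) {
            print "$1:$2\n";
        }
    }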
There is simply no way to do this with the current system. The best you can do is see which APIs are being called in the failing test, then grep all the source files and hope you turn up a likely-looking candidate. That is bad enough in a moderately complex suite of your own writing, but backtracking from a failing test to the failing code in a complex suite for code you didn't write is nigh on impossible.
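The grep step itself is easy enough to script; it's the hoping that doesn't scale. A throwaway sketch, taking the API name as an argument and defaulting to lib/:

    use strict;
    use warnings;
    use File::Find;

    # The fallback described above: trawl the source tree for an API
    # name seen in the failing test, printing every hit as file:line.
    my $api = shift or die "usage: $0 <api-name> [dir]\n";
    my $dir = shift || 'lib';

    find( sub {
        return unless /\.pm$/;
        open my $src, '<', $_ or return;
        while ( my $code = <$src> ) {
            print "$File::Find::name:$.: $code" if $code =~ /\b\Q$api\E\b/;
        }
    }, $dir );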
All those extra steps and discontiguous paths just throw away the beauty of the edit/run loop that makes Perl (and other dynamic languages) such a delight to program in.
In reply to Re^7: Need advice on test output by BrowserUk
in thread Need advice on test output by Ovid