If you follow Perl's QA list, you know that Schwern and co. have this crazy idea to create a TAP::Harness. This is great, as the venerable Test::Harness is showing its age. Part of this process is writing a new parser for TAP, and that's where I could use your help.

I've finally gotten TAPx::Parser solid enough to upload it to the CPAN. At this point, the parser seems fairly robust, but I could use help running it against numerous test suites. Note that TAPx::Parser is not an official parser which the Perl QA folks have endorsed. I'm hoping to get it robust enough that they'll consider it as an option (in large part because very little work has actually been done on the "official" TAP parser).

In the distribution, you'll find a program called tprove in the examples/ directory. That's a simplistic test harness which will run your test files and spit out results. If you read the code of tprove and the docs for TAPx::Parser, it should be fairly clear how to play with this to get yourself colored test output, hook it into a GUI, set off audible alarms if tests fail, or whatever else you want.
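
If you want to drive the parser directly rather than through tprove, a minimal sketch looks something like this. (The method names - new with a tap argument, next, is_test, as_string and parse_errors - are my reading of the TAPx::Parser docs, so treat them as assumptions and check them against the version you download; t/some_test.t is just a placeholder.)

use strict;
use warnings;
use TAPx::Parser;

# Run a test file with the current perl and capture its raw TAP output.
# 't/some_test.t' is only a placeholder for one of your own tests.
my $test = 't/some_test.t';
my $tap  = `$^X -Ilib $test`;

my $parser = TAPx::Parser->new( { tap => $tap } );

# Walk the TAP stream one result at a time.  Each $result is an object
# describing a single line: a plan, a test, a comment and so on.
while ( my $result = $parser->next ) {
    print $result->as_string, "\n";
}

# Anything the parser could not make sense of shows up here.
if ( my @errors = $parser->parse_errors ) {
    print "Parse error: $_\n" for @errors;
}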

The primary limitation of tprove at this point is that it must be run from the directory containing the lib/ directory of the modules you wish to test, and you must hard-code the path to TAPx-Parser unless you choose to install it.

One way to check it out is to simply download the tarball, unpack it, cd into TAPx-Parser-0.11/ and run this command:

TAPx-Parser-0.11 $ perl -Ilib examples/tprove

At the end of the test run, you should see something like this:

ok 64 - ... and junk should parse correctly
ok 65 - ... and the second test should parse correctly
ok 66 - ... and comments should parse correctly
ok 67 - ... and the third test should parse correctly
ok 68 - ... and the fourth test should parse correctly
ok 69 - ... and fifth test should parse correctly
ok 70 - ... and we should have no parse errors
ok
Testing t/pod-coverage.t
t/pod-coverage.t...... 1..0 # Skip Test::Pod::Coverage 1.04 required for testing POD coverage
ok
Testing t/pod.t
t/pod.t...... 1..0 # Skip Test::Pod 1.14 required for testing POD
ok
Tests run: 254
Passed: 254
Failed: 0
Errors: 0

"Errors" refers to parse errors; if there are any, a summary will be displayed. Reproducible parse errors are what I am particularly anxious to see.
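
If you do hit a parse error, a self-contained reproduction is the most useful thing to send me: paste the offending chunk of TAP into a small script and report what the parser says. Something along these lines should do (again assuming the new/next/parse_errors interface from the docs; the TAP in the heredoc is made up):

use strict;
use warnings;
use TAPx::Parser;

# Paste the TAP that triggers the problem into the heredoc below;
# what follows is made-up example output, not a real failure.
my $tap = <<'END_TAP';
1..2
ok 1 - first test
not ok 2 - second test # TODO not written yet
END_TAP

my $parser = TAPx::Parser->new( { tap => $tap } );
1 while $parser->next;    # consume the whole stream

print "Parse error: $_\n" for $parser->parse_errors;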

Note: As of this writing, the docs are up on the CPAN site but you can't actually seem to download the package. That should be cleared up soon. In the meantime, you can download TAPx::Parser from my Web site.

Cheers,
Ovid

New address of my CGI Course.

Re: I need your help with parsing TAP output
by Tanktalus (Canon) on Jul 29, 2006 at 23:55 UTC

    After this came up in the CB, and jkva was kind enough to point to this thread, I saw it was basically unanswered, despite being front-paged. Looking at this, I have a couple of questions.

    1. What is wrong with Test::Harness that needed fixing? I'm not saying there isn't anything, only that in my experience with Test::Harness I haven't run into much I wanted to do that wasn't available. Except have it figure out my plan for me - counting isn't fun.
    2. What is it exactly that you are asking of other monks? Are you asking us to run tprove with our existing tests? Is there any rewriting that we might have to do, or is this completely compatible with Test::More and its ilk? It's not entirely clear to me what you're asking of us, and maybe others wondered the same thing.
    Thanks,

      Thanks for the feedback. I'll tackle the second question first: I was hoping monks would simply run the tprove program against some test suites and let me know if there were any parse errors. Fortunately, I have a newer version of tprove which should be out soon and will make this easier to do. No coding is required, but if anyone wanted to, I wouldn't complain!

      As for the problems with Test::Harness, those are documented here. Even the creators and maintainers of Test::Harness agree that it has to go. Basically, we're talking about code that has evolved over almost 20 years into a big ball of mud. This has led to confused responsibilities within the system and made it very hard to extend with new features.

      What do you want to do with that test output? Want colored tests? Want to report only failures? Want to throw them up in a GUI like JUnit and friends? Want to transform them into something your language's test harness can understand? Want them as XML docs? Want to email them? Want to have them run as your wallpaper? Forget it. Well, it turns out you can do some of those things, but it's very hard to get them right. Attempts invariably turn out buggy, because Test::Harness just can't do what we want, and simply refactoring Test::Harness is difficult because so much code depends upon its globals and other quirks. Heck, if you don't believe me, just look at the bug reports and try to fix a few of those :)
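
      For instance, with a parser that hands you one result object at a time, "show me only the failures, in color" stops being a patch to Test::Harness and becomes a few lines of your own code. Here's a rough sketch (the TAPx::Parser method names, including is_ok, are assumptions on my part, and Term::ANSIColor is just one way to do the coloring):

      use strict;
      use warnings;
      use TAPx::Parser;
      use Term::ANSIColor qw(colored);

      # Run one test file (placeholder name) and print only its failures in red.
      my $tap    = `$^X -Ilib t/some_test.t`;
      my $parser = TAPx::Parser->new( { tap => $tap } );

      while ( my $result = $parser->next ) {
          next unless $result->is_test && !$result->is_ok;
          print colored( $result->as_string, 'red' ), "\n";
      }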

      Cheers,
      Ovid

      New address of my CGI Course.

        Hi Ovid. There are a few implementations of TAP out there in C (such as libtap). How about creating a backend to one using XS?