nglenn has asked for the wisdom of the Perl Monks concerning the following question:
I have been working on some legacy code that does bit-level manipulation in XS. The distribution is Algorithm::AM, and here are the current CPAN Testers results. They are rather boggling to me. The tests passed on Windows 64-bit Perl 5.16 before I uploaded, so I don't see any particular pattern to the failures (besides "mostly fail"). I see both passing and failing tests on single- and multi-threaded builds, on *nix and Windows, and on multiple versions of Perl. The amount of information in a single report is difficult to hold in my head while comparing it with other reports.
I am looking for a way to parse these test results into a hash or some similar structure so that I can compare them all. Maybe I could build a feature chart, or run a decision tree classifier on the data to try to pinpoint the environment parameters that make the tests fail (I suspect that certain combinations of parameters cause the failures). Are there any modules out there for parsing these test reports? The report is written via this code.
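As a starting point, here is a minimal sketch of the kind of parsing asked about. It assumes (as CPAN Testers reports typically do) that the report body embeds the `perl -V` configuration dump, whose `key=value` pairs describe the test environment. The sample text and the sub name `parse_report_config` are illustrative, not part of Algorithm::AM:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical helper: pull key=value pairs out of the "perl -V"
# configuration text embedded in a report, returning a flat hashref
# so many reports can be compared field by field.
sub parse_report_config {
    my ($report_text) = @_;
    my %config;
    # Match pairs like "osname=linux" or "config_args='-des ...'";
    # values are either single-quoted strings or bare tokens.
    while ( $report_text =~ /\b(\w+)=('[^']*'|[^\s,]+)/g ) {
        my ( $key, $value ) = ( $1, $2 );
        $value =~ s/^'//;
        $value =~ s/'$//;
        $config{$key} = $value;
    }
    return \%config;
}

# Stand-in for one report's perl -V section:
my $sample = <<'END';
  Platform:
    osname=linux, osvers=3.2.0, archname=x86_64-linux
    useithreads=define, usemultiplicity=define
END

my $config = parse_report_config($sample);
print "$_=$config->{$_}\n" for sort keys %$config;
```

With each report reduced to such a hash, a feature chart is just a matter of collecting the hashes keyed by report ID and tabulating the values of fields like `osname`, `archname`, and `useithreads` against pass/fail status.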
Alternatively, of course, if you have the experience to immediately see a pattern in these reports, then please tell me what you see.
Replies are listed 'Best First'.
Re: parsing and comparing test reports
by davido (Cardinal) on Feb 09, 2014 at 01:28 UTC
by Anonymous Monk on Feb 09, 2014 at 02:24 UTC
by nglenn (Beadle) on Feb 09, 2014 at 02:54 UTC

Re: parsing and comparing test reports
by Anonymous Monk on Feb 09, 2014 at 00:52 UTC
by nglenn (Beadle) on Feb 09, 2014 at 01:06 UTC
by Anonymous Monk on Feb 09, 2014 at 02:13 UTC
by nglenn (Beadle) on Feb 09, 2014 at 03:11 UTC
by nglenn (Beadle) on Feb 11, 2014 at 23:31 UTC