nglenn has asked for the wisdom of the Perl Monks concerning the following question:

Hi monks,

I have been working on some legacy code that does bit-level manipulation in XS. The distribution is Algorithm::AM, and here are the current CPAN Testers results. They are rather boggling to me. I passed the tests on 64-bit Windows with Perl 5.16 before uploading, so I don't see any particular pattern to the failures (besides "mostly fail"). I see passing and failing tests on single- and multi-threaded perls, on *nix and Windows, and on multiple versions of Perl. The amount of information in a single report is too much to hold in my head while comparing it with other reports.

I am looking for a way to parse these test results into a hash or something so that I can compare them all, perhaps make a feature chart, or run a decision tree classifier on them to pinpoint the environment parameters that make the tests fail (I suspect it may be certain combinations of parameters that cause the failures). Are there any modules out there for parsing these test reports? The report is written via this code.
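To make it concrete, this is roughly the sort of thing I have in mind (just a sketch: it assumes the reports have been saved locally as plain-text files under reports/, and the regexes are only my guesses at the fields a report contains):

    use strict;
    use warnings;

    # Rough sketch: pull a few environment fields out of saved CPAN Testers
    # report files and print them next to the grade, so reports can be
    # eyeballed (or fed to a classifier) side by side. The field names and
    # regexes below are guesses, not a definitive report grammar.
    my %features;
    for my $file (glob 'reports/*.txt') {
        open my $fh, '<', $file or die "$file: $!";
        my $text = do { local $/; <$fh> };
        $features{$file} = {
            grade    => $text =~ /\b(PASS|FAIL|NA|UNKNOWN)\b/ ? $1 : '?',
            osname   => $text =~ /osname=([^\s,']+)/          ? $1 : '?',
            archname => $text =~ /archname=([^\s,']+)/        ? $1 : '?',
            threads  => $text =~ /useithreads=(\w+)/          ? $1 : '?',
            perl     => $text =~ /perl\s+v?(5\.\d+\.\d+)/i    ? $1 : '?',
        };
    }

    # Primitive feature chart: one row per report.
    for my $file (sort keys %features) {
        my $f = $features{$file};
        printf "%-8s %-10s %-30s threads=%-7s perl=%s\n",
            @{$f}{qw(grade osname archname threads perl)};
    }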

Alternatively, of course, if you happen to have the experience to immediately see any pattern in these reports then please tell me what you see.

Replies are listed 'Best First'.
Re: parsing and comparing test reports
by davido (Cardinal) on Feb 09, 2014 at 01:28 UTC

    I would expect it to show up here: CPAN Testers Analysis. I've found those stats very useful. It usually takes between one and four days for an uploaded distribution to appear in the analysis.


    Dave

      Hey, now that is a cool site. Thanks for pointing me to it.
Re: parsing and comparing test reports
by Anonymous Monk on Feb 09, 2014 at 00:52 UTC

    OTOH, there are https://metacpan.org/pod/CPAN::Testers::Report, CPAN::Testers::ParseReport (parse reports to www.cpantesters.org from various sources), and CPAN::Testers::WWW::Reports (the CPAN Testers Reports website).
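    CPAN::Testers::ParseReport in particular ships a ctgetreports command that can fetch the reports for a distro and, with --solve, run a regression to guess which variables separate PASS from FAIL, which is pretty much the "decision tree" idea above. Roughly (double-check the option names against the module's docs):

        # fetch the Algorithm-AM reports and let it try to identify
        # the variables that correlate with failure
        ctgetreports --solve Algorithm-AM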

    The amount of information in a single report is difficult to hold in my head for comparison with other reports.

    Why even try? Pick any one report that you understand at least a little and, based on that understanding, add an extra, verbose test to the distribution that verifies all the assumptions you can think of that your program/module relies on (see the sketch after the list below). Bump the version number and push to CPAN.

    Wait for the testers to test it, so that your new test gives you all the info you need :D

    1) to write more tests, 2) or to fix some stuff :)
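    For example, a throwaway t/99-diagnostics.t along these lines (only a sketch; the file name, the Config fields, and the sprintf check are my guesses at what is worth verifying, so adjust it to whatever your module actually assumes):

        use strict;
        use warnings;
        use Test::More tests => 1;
        use Config;

        # Dump the environment facts we care about into the test output,
        # so that every CPAN Testers report carries them in one place.
        diag "osname=$Config{osname} archname=$Config{archname}";
        diag 'useithreads=' . (defined $Config{useithreads} ? $Config{useithreads} : 'undef');
        diag "ivsize=$Config{ivsize} uvsize=$Config{uvsize} perl=$]";

        # Verify one concrete assumption and show the actual value on
        # failure, so a FAIL report tells us something useful.
        my $pct = sprintf '%.3f', 9 / 13 * 100;
        is $pct, '69.231', 'sprintf rounds 9/13 the way we expect'
            or diag "sprintf gave: $pct";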

    I'd take a look but http://static.cpantesters.org/distro/A/Algorithm-AM.html is timing out on me

      Thanks! I'll be trying this... Maybe this link will work better for you? link Thanks for even considering doing that. I'm lost in a report jungle.


        :) No problem ... FWIW that part of the internet is accessible to me now

        The essence of all the reports is the same: your test is checking for a string containing 69.231, and the output it gets doesn't have that.

        My opinion: there is probably nothing more to be learned from these reports (no real point in parsing them). The failing and passing machines are fairly identical, and the numbers reported are also the same, except for the percentages.

        So the problem is with the printing of the report

        Either you're giving sprintf something it doesn't like (a wrong format), or the sprintf implementation is broken on that machine (unlikely, but not impossible).

        So if I were you, the next move would be to add warn "\n\n", Data::Dump::pp( \%formats, \%stats ), "\n\n"; (plus a use Data::Dump; so that pp is available) around line 1700, "calculate results", in Algorithm/AM.pm

        and push it to CPAN
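        Or, to check the first hypothesis directly, something this small could go into the same debugging release (the '%.3f' format is only my guess at what the module uses, so substitute the real one):

            use strict;
            use warnings;

            # The good reports show "r 9 69.231%" out of 13 pointers and the
            # bad ones show 0.000%, so print the raw numbers next to the
            # formatted string to see which side goes wrong on failing machines.
            my ( $count, $total ) = ( 9, 13 );
            my $pct = sprintf '%.3f', $count / $total * 100;    # guessed format
            warn "count=$count total=$total formatted=$pct%\n";  # expect 69.231%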


        The essence

        bad:

            Number of data items: 5
            Total Excluded: 5
            Nulls: exclude
            Gang: squared
            Number of active variables: 3
            Statistical Summary
            e   4    0.000%
            r   9    0.000%
            ----------------------------------------
            340282366920938463463374607431768211455G

        good:

            Number of data items: 5
            Total Excluded: 5
            Nulls: exclude
            Gang: squared
            Number of active variables: 3
            Statistical Summary
            e   4   30.769%
            r   9   69.231%
            --  13

        # Failed test 'Chapter 3 data, counting pointers'
        # at t/01-classify.t line 29.
        # got: "Test items left: 1\x{0a}Time: 22:05:32\x{0a}3 1 2\x{0a}0/1 22:05"...
        # length: 525
        # doesn't match '(?^:e\s+4\s+30.769%\v+r\s+9\s+69.231%)'

        These are the reports that are essentially verbatim


        The rest you can ignore

        This one had an extra failure due to dzil stuff (ignore it; probably an outdated Dist::Zilla)


        This one is probably identical, except that some of the failing test output is missing; the tester probably has an old version of Test::More, or is maybe missing IPC::Run.


        This one had an old perl, so ignore this report,

        as it's purely a tester's problem (probably an old version of the testers toolchain)

        If you want to do something, you could add an extra key (MIN_PERL_VERSION) to WriteMakefile; see http://wiki.cpantesters.org/wiki/CPANAuthorNotes
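        i.e. something like this in Makefile.PL (the version number and the VERSION_FROM path are just placeholders for whatever the distribution really uses; if the Makefile.PL is generated by Dist::Zilla, declare the minimum perl as a prerequisite there instead):

            use ExtUtils::MakeMaker;

            WriteMakefile(
                NAME             => 'Algorithm::AM',
                VERSION_FROM     => 'lib/Algorithm/AM.pm',
                # refuse to build on perls older than this, so very old
                # testers report NA instead of FAIL
                MIN_PERL_VERSION => '5.010',
                # ... rest of the existing arguments ...
            );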