in reply to Re: parsing and comparing test reports
in thread parsing and comparing test reports

Thanks! I'll be trying this... Maybe this link will work better for you? link Thanks for even considering doing that. I'm lost in a report jungle.

Re^3: parsing and comparing test reports
by Anonymous Monk on Feb 09, 2014 at 02:13 UTC

    Maybe this link will work better for you? link Thanks for even considering doing that. I'm lost in a report jungle.

    :) No problem ... FWIW that part of the internet is accessible to me now

    The essence of all the reports is the same: your test is checking for a string containing 69.231, and the output it gets doesn't have it.
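
    For reference, the failing assertion amounts to something like this sketch (the report string here is a stand-in built from the passing output below, not copied from t/01-classify.t):

        use strict;
        use warnings;
        use Test::More tests => 1;

        # stand-in for the statistical summary the module prints
        my $report = "Statistical Summary\ne   4   30.769%\nr   9   69.231%\n";

        # the test checks the report against a pattern like this one
        like(
            $report,
            qr/e\s+4\s+30.769%\v+r\s+9\s+69.231%/,
            'Chapter 3 data, counting pointers',
        );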

    In my opinion there is probably nothing more to be learned from these reports (no real point in parsing them); the failing and passing machines are fairly identical, and the numbers they report are the same except for the percentages.

    So the problem is with the printing of the report

    Either you're feeding sprintf something it doesn't like (a bad format or a bad value), or the sprintf implementation on that machine is broken (unlikely, but not impossible).
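
    For what it's worth, a garbage grand total alone would explain the 0.000% lines without sprintf being broken at all. A tiny sketch (not the module's code; the format width and the count/total arithmetic are assumed):

        use strict;
        use warnings;

        my $count      = 9;
        my $good_total = 13;                                         # total pointers in the passing report
        my $bad_total  = 340282366920938463463374607431768211455;    # from the failing report, i.e. 2**128 - 1

        # percentage = count / grand total, formatted with sprintf-style %f
        printf "r  %d  %7.3f%%\n", $count, 100 * $count / $_ for $good_total, $bad_total;
        # prints "r  9   69.231%" for the sane total and "r  9    0.000%" for the garbage one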

    So if I were you, the next move would be to add  warn "\n\n", Data::Dump::pp( \%formats, \%stats ), "\n\n";  around line 1700 ("calculate results") in Algorithm/AM.pm

    and push it to CPAN.
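
    Something like this, concretely (untested; the keys in the stand-in hashes are made up, the real %formats and %stats are whatever that section of AM.pm already builds):

        use strict;
        use warnings;
        use Data::Dump ();    # pp() returns a pretty-printed dump as a string

        # stand-ins so the snippet runs on its own; in AM.pm you would dump
        # the real %formats and %stats from the "calculate results" section
        my %formats = ( percentage => '%7.3f%%' );    # made-up key
        my %stats   = ( total_pointers => 13 );       # made-up key

        # the line to drop in around line 1700 and ship in a dev release, so the
        # failing testers' logs show exactly what sprintf is being fed
        warn "\n\n", Data::Dump::pp( \%formats, \%stats ), "\n\n";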


    The essence

    bad:
        Number of data items: 5
        Total Excluded: 5
        Nulls: exclude
        Gang: squared
        Number of active variables: 3
        Statistical Summary
        e 4 0.000%
        r 9 0.000%
        ----------------------------------------
        340282366920938463463374607431768211455G

    good:
        Number of data items: 5
        Total Excluded: 5
        Nulls: exclude
        Gang: squared
        Number of active variables: 3
        Statistical Summary
        e 4 30.769%
        r 9 69.231%
        --
        13

        # Failed test 'Chapter 3 data, counting pointers'
        # at t/01-classify.t line 29.
        #     got: "Test items left: 1\x{0a}Time: 22:05:32\x{0a}3 1 2\x{0a}0/1 22:05"...
        #  length: 525
        # doesn't match '(?^:e\s+4\s+30.769%\v+r\s+9\s+69.231%)'

    These are the reports that are essentially verbatim


    The rest you can ignore

    This one had an extra failure due to dzil stuff (ignore it; probably an outdated Dist::Zilla).


    This one is probably identical, except that some of the failing test output is missing; the tester probably has an old version of Test::More, or maybe is missing IPC::Run.


    This one had an old perl, so ignore this report: it's purely the tester's problem (probably an old version of the tester's toolchain).

    If you want to do something about it, you could add an extra key (MIN_PERL_VERSION) to WriteMakefile; see http://wiki.cpantesters.org/wiki/CPANAuthorNotes
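
    Roughly like this (a stripped-down Makefile.PL, not yours; the paths and the version number are assumed, and since dzil was mentioned above the key may end up being set indirectly by the generator, with the same effect):

        use strict;
        use warnings;
        use ExtUtils::MakeMaker;

        WriteMakefile(
            NAME             => 'Algorithm::AM',
            VERSION_FROM     => 'lib/Algorithm/AM.pm',   # assumed layout
            # makes old perls bail out of the build early, so their testers
            # report NA instead of FAIL
            MIN_PERL_VERSION => '5.010',                 # whatever you actually require
        );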

      Thanks for looking at all of that for me! I am glad to have someone tell me not to bother doing a bunch of extra work.

      I will try doing that. Actually, maybe I'll use Devel::Peek for it. The thing about that huge number is that it's a custom big integer: it is created in AM.xs (grandtotal) as an array of longs, and in the XS normalize function a string and a double representation are added to it. I suspected it would be susceptible to overflow or something similar if the data types have the wrong bit sizes. Maybe I'll follow your advice about printing things out and just add a ton of print statements in the XS, too.
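
      Roughly what I have in mind for the Perl side, as a sketch (the values here are stand-ins; the real thing to dump is whatever scalar the XS hands back as grandtotal):

          use strict;
          use warnings;
          use Devel::Peek;    # Dump() prints a scalar's internal slots (IV/NV/PV and flags) to STDERR

          # stand-ins: give a scalar both a numeric and a string slot, the way
          # the XS normalize function attaches string and double representations
          my $grandtotal = 13;
          my $as_string  = "$grandtotal";

          # shows which slots and flags are actually set, so mismatched
          # representations or wrong bit widths stand out in the tester logs
          Devel::Peek::Dump($grandtotal);
          Devel::Peek::Dump($as_string);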

      Not sure if you'll even get this message since you were signed in anonymously, but I just wanted to say thanks. I uploaded 8 releases to CPAN, each with various print statements, and found that the problem was an uninitialized array element, which may or may not be garbage in C. So the bug was present on every system, but it only showed up on some of them. We're all very happy to have found it!