pryrt has asked for the wisdom of the Perl Monks concerning the following question:
How do you go about debugging failures from CPAN Testers when your own configurations aren't failing? I'd like advice, both in general and on anything you see in my specific examples below.
For example, this test matrix has a bunch of failures -- but when I test on my own machines, I cannot replicate the errors the testers are getting.
Before releasing, I tested on a couple of the Perl versions I have access to (Strawberry Perl 5.24.0_64 on Win7, and an ancient CentOS 4.6 box running Linux 2.6.9-55 with perl 5.8.5), and neither failed my test suite. Since seeing the CPAN Testers failures, I've started adding berrybrew installations to improve my version coverage -- but so far they've all passed, even on Perl versions that failed in the Linux column.
After I've exhausted the available Strawberry installations, I will probably grab one of my Linux virtual machines, add a bunch of perlbrew installations, and run through as many versions as I can there (I cannot install perlbrew or other local perls on the CentOS machine I mentioned, due to disk restrictions). But even after trying a whole new slew of versions, there's no guarantee I'll see the same failures that CPAN Testers is showing me.
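For what it's worth, the perlbrew pass I have in mind is roughly the sketch below (not tested against my real distribution; it assumes a standard perlbrew layout and that the distribution's prerequisites are already installed under each perl):

```perl
#!/usr/bin/perl
# Sketch: run the test suite under every perlbrew-installed perl and print
# a one-line PASS/FAIL summary per version.  `perlbrew exec prove -lr t`
# does much the same thing in one shot, without the per-version summary.
use strict;
use warnings;

my $root  = $ENV{PERLBREW_ROOT} || "$ENV{HOME}/perl5/perlbrew";
my @perls = map  { s/^\*?\s+|\s+$//gr }     # strip the '*' marker and padding
            grep { /\S/ } `perlbrew list`;

for my $perl (@perls) {
    my $prove = "$root/perls/$perl/bin/prove";    # each perl has its own prove
    next unless -x $prove;
    my $status = system( $prove, '-lr', 't' );    # prove exits non-zero on failure
    printf "%-20s %s\n", $perl, $status == 0 ? 'PASS' : 'FAIL';
}
```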
I know where I'll be looking for the specific errors: my expected values are wrong. Those expected values were generated by functions I thought had been fully tested earlier in my test suite, so I'll have to look into that some more, and also consider whether I should generate the expected values independently.
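To illustrate the expected-values concern with made-up function names (not my real module), the difference is roughly:

```perl
use strict;
use warnings;
use Test::More;

# Made-up names for illustration: pretend to_hex() was tested earlier in
# the suite and from_hex() is the function under test here.
use My::Module qw(to_hex from_hex);

# Fragile: the expected value is derived from another function in the same
# module, so a platform-specific bug shared by both can cancel out and the
# test still passes on my machines.
is( from_hex( to_hex(255) ), 255, 'round-trips through to_hex()' );

# Sturdier: compare against an independently computed literal.
is( from_hex('0xFF'), 255, 'from_hex() matches a precomputed value' );

done_testing();
```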
But if I cannot replicate the exact failures from CPAN Testers, it will be harder to know when I've actually solved the problem. For my last feature release, I ended up uploading beta versions to CPAN with extra debug printing, waiting overnight while the CPAN Testers ran, and then basing my fixes on the changes in those results. That's a rather slow debugging loop... and I noticed that with every submission I was getting fewer results from the testers: I suspect some of the auto-testers have submission limits, or otherwise remember that a particular module fails and stop testing new versions of it.
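As a rough illustration of that extra-debug-printing approach (not my actual test code; the function name is a stand-in), Test::More's diag() output does get captured in the reports, so each failure can carry its own context:

```perl
use strict;
use warnings;
use Test::More;
use Config;

# Record the environment in every report (perl version, OS, archname),
# so a remote failure comes with some context attached.
diag( sprintf 'perl %s on %s (%s)', $], $^O, $Config{archname} );

# On failure, show the raw value rather than just "not ok".
my $got = some_function_under_test();    # stand-in for the real call
ok( defined $got && $got eq 'expected', 'got the expected value' )
    or diag( 'got: ' . ( defined $got ? "'$got'" : 'undef' ) );

done_testing();

sub some_function_under_test { 'expected' }    # placeholder so this runs
```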
Any advice, generic or specific, would be welcome.