in reply to Strange CPANTS report

First, as another monk has noted, you have confused the perl.cpan.testers newsgroup/mailing list with CPANTS. CPANTS is an attempt (and a very fallible one, IMHO) to measure certain characteristics of distributions uploaded to CPAN. The cpan.testers reports, by contrast, give you important feedback on how your distribution tests in different operating environments -- an environment being the combination of the OS and the perl version used for testing.

Second, I recently spent several weeks fine-tuning a new version of the ExtUtils::ModuleMaker distribution. I knew the code was valid, but I was trying to be much more rigorous in my testing than in my earlier CPAN distributions and, as a result, I had considerably greater problems getting the tests right. When I got FAILs from the cpan.testers, it forced me to think very carefully with respect to how my tests were interacting with the tester's environment.

Third, of all the CPAN testers who posted reports on my ExtUtils::ModuleMaker distribution (and its new companion, ExtUtils::ModuleMaker::PBP), the most helpful was imacat -- the tester in Taiwan who sent you a FAIL. Why was she the most helpful? (A) In the immediate sense, because, upon my request, she sent me the output of 'prove -vb t/mytest.t' several times. (B) Because her testing environment (at least on her Linux box) is more pristine than that of other cpan.testers, so it gives you a more accurate reading as to whether you have accounted for all your prerequisites. Example: On my own boxes and those of many testers, Module::Build is installed even though it's not (yet) core. Only a testing box without Module::Build will tell you when your code fails a 'require Module::Build' test.
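
To make that concrete, here is a minimal sketch of the sort of prerequisite check I mean. The module name is just the Module::Build example from above, and the test itself is illustrative rather than lifted from my own suite:

    use Test::More tests => 1;

    # This check only fails on a box where the prerequisite is genuinely
    # absent -- which is exactly what a pristine environment exposes.
    eval { require Module::Build };
    ok( !$@, 'Module::Build is available' );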

All of which is to say: Treat a FAIL from imacat seriously, even if that test is receiving PASSes from other testers.

Fourth, while I would recommend asking a cpan.tester for the output of 'prove -vb t/myscript.t' for any failing test, it's not going to be very helpful in your case, because the messages (or 'names' or 'labels' or whatever they're calling them this week) that you have included with your tests are overly terse and not self-documenting (IMO). For example, opening up your first failing test file, t/02-newick.t, I see that the third test is:

ok( Bio::Phylo::IO->parse( -file => 'tree.dnd', -format => 'newick' ), '3 parse' );

A message such as '3 parse' is probably meaningful to you, the module's author. But it gives me no clue as to what was supposed to happen with this test. What is supposed to be the result of the parse call? Creation of a file? Verification of a new file's existence? Verification that a new file contains particular content?
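
By contrast, a label along these lines would tell a tester (or me) what to look for. This is only a hypothetical reworking -- I'm guessing that parse() is meant to return something true, such as an object, on success:

    ok(
        Bio::Phylo::IO->parse( -file => 'tree.dnd', -format => 'newick' ),
        'parse() returns a true value for a valid Newick file'
    );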

Similarly, in the next file in which you are encountering problems, t/14-nexus.t, your test labels simply read 'test 1', 'test 2' and so on. No diagnostic help there.

Okay, enough ranting. Let me advance a hypothesis as to what's going wrong. Let's take t/02-newick.t as an example. In this test file, you print to a file called tree.dnd, which, since no path information is provided, is presumably created in the same directory from which the tests are being run -- most likely Bio-Phylo-0.07. More to the point, you are not creating this file in a temporary directory such as would be provided by File::Temp and would be disposed of once the test file has finished.

I note also that while your version 0.04 of Bio::Phylo passed on this box, your version 0.05 did not. In fact, it failed at the same point as 0.07. Based on my own experience with testing on that box, I would explore the possibility that your FAIL on 0.05 left a file on that box which is continuing to "pollute" your testing environment.

Granted, you don't have access to the tester's box. But if, after running the tests on your own box, you notice that files created by the testing process are lying around, then you may be leaving such files on the tester's box as well, and that could be the source of the FAILs.

Whether or not that hypothesis is confirmed, I recommend more descriptive test messages and use of File::Temp to create directories which hold files created during the testing process.
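
For the latter, here is a minimal sketch of the File::Temp approach. The Newick string and the test label are placeholders, not taken from your suite, and I'm assuming parse() simply needs a readable file at the path it's given:

    use Test::More tests => 1;
    use File::Temp qw( tempdir );
    use File::Spec;
    use Bio::Phylo::IO;

    # Write the test input into a directory that is cleaned up automatically
    # when the test file exits, so nothing is left behind on the tester's box.
    my $dir  = tempdir( CLEANUP => 1 );
    my $file = File::Spec->catfile( $dir, 'tree.dnd' );

    open my $fh, '>', $file or die "Cannot write $file: $!";
    print {$fh} '((A,B),C);';    # placeholder Newick string
    close $fh;

    ok(
        Bio::Phylo::IO->parse( -file => $file, -format => 'newick' ),
        'parse() handles a Newick file written to a temporary directory'
    );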

Jim Keenan

Update an hour later:

I'm less confident than earlier about my hypothesis. Presumably, any files created during the process of testing Bio-Phylo-0.04 are distinct from those created while testing 0.05 or 0.07.

So let me suggest another approach. Since 0.04 passed on imacat's box while 0.05 and 0.07 failed, you should look at the differences between 0.04 and 0.05, taking care to distinguish between changes in the modules and changes in the test suite. A quick application of the CPAN diff tool (http://search.cpan.org/diff?from=Bio-Phylo-0.04&to=Bio-Phylo-0.05) suggests that you did some heavy surgery on your modules between those two versions. In particular, lib/Bio/Phylo/IO.pm, the package that exports the problematic parse function, was not present in 0.04 but does appear in 0.05. Perhaps that's where you should focus.

jimk

Replies are listed 'Best First'.
Re^2: Strange CPANTS report
by rvosa (Curate) on Oct 04, 2005 at 11:14 UTC
    imacat has replied that the garbled error messages were:
    "Inappropriate ioctl for device"
    I'm pretty sure Jim is right about the dodgy way test input files are read and written, so I'll look into that, but can someone tell me how and why "Inappropriate ioctl for device" might crop up during a test? I'm normally on Win32, so I need to do some reading up on ioctl.
      Googling for that phrase, I got a lot of entries on many different systems, few or none of them Perl-related. So I think it's just a message that prints out as part of the FAIL and doesn't say anything in particular about your problem.

      And this is supported by the fact that the phrase is present in all my FAIL reports from imacat's Linux box. Example: http://www.nntp.perl.org/group/perl.cpan.testers/254139. In my case, this message had nothing to do with the cause of the FAILs; the real problems lay in the test suite itself. My hunch is that it's the same for you.

      jimk

        I looked at the tester's report. Most of the Chinese error messages are about 'Inappropriate ioctl for device', but the first is about 'file or directory not found'. That is, not all the Chinese messages are the same. If the ioctl message comes with all of imacat's FAILs, then perhaps the problem with this particular test is a missing file.
Re^2: Strange CPANTS report
by rvosa (Curate) on Oct 04, 2005 at 10:28 UTC
    Wow, good points. Thanks!