in reply to Re^3: Building Win32::GuiTest for perl 5.14 or higher (Bad tests!)
in thread Building Win32::GuiTest for perl 5.14 or higher

The author should bail out (or possibly die at Makefile.PL stage) if Win32/cygwin isn't available.

The trouble with that theory is that whenever a potential user checks the CPAN Testers grid, the preponderance of red FAILs makes it look, to the uninitiated, like a bad module. This makes both the module and the author look like crap.

The real failure here is the absence of a "Not applicable" category, which IMO renders the Testers grid worse than worthless.

As it stands, with the ratio running 9:1 against any Win32-only module, authors are faced with either accepting a screen of red when there is nothing wrong, or a screen of green when there might be something wrong. It's a piss-poor choice either way.


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

The start of some sanity?


Replies are listed 'Best First'.
Re^5: Building Win32::GuiTest for perl 5.14 or higher (Bad tests!)
by davido (Cardinal) on Jun 25, 2012 at 19:37 UTC

    The CPAN Testers' Module Author's FAQ provides a good suggestion. I was close, but this is better:

    "How can I indicate that my distribution only works on a particular operating system?"

    While it isn't a very elegant solution, the recommended approach is to either die in the Makefile.PL or Build.PL (or BAIL_OUT in a test file) with one of the following messages:

    "No support for OS"
    "OS unsupported"

    CPAN Testers tools will look for one of those phrases and will send an NA (Not Applicable) report for that platform.
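    A minimal sketch of what that might look like in a Makefile.PL, assuming the module only supports native Windows and Cygwin perls (the module name and version here are placeholders; the `os_supported` helper is mine, not part of any API — the essential part is dying with the exact phrase "OS unsupported" before WriteMakefile() runs):

    ```perl
    use strict;
    use warnings;

    # $^O is "MSWin32" on native Windows perls and "cygwin" under Cygwin.
    sub os_supported { return $_[0] =~ /^(?:MSWin32|cygwin)$/ }

    # In a real Makefile.PL this would run before WriteMakefile():
    #   use ExtUtils::MakeMaker;
    #   die "OS unsupported\n" unless os_supported($^O);
    #   WriteMakefile( NAME => 'Win32::GuiTest', VERSION => '0.01' );
    # Shown as a plain check here so the sketch runs on any platform:
    print os_supported('MSWin32') ? "supported\n" : "OS unsupported\n";
    ```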


    Dave

      And that is somehow better than assuming that a failure of a module named Win32::something, reported by a tester not on that platform, is a "Not applicable"?



        I believe it's better in two ways:

        First, a module author gets failure reports via email (if so configured). I would want those failure reports to constitute actual failures that I need to research. I don't need to look into an NA: I know what that is already (or I should). As an author, http://analysis.cpantesters.org provides more useful analysis to me if I withhold from it failures that are not interesting to me. When I look at 250 test reports and 80% of them are failures, even though I know I can ignore the non-Win32 ones, it's easier to visualize what's going on if FAIL means there was a failure where a PASS was expected, and NA means the failure was fully expected.

        Second, as a potential user... particularly as a user who is considering a given module as part of a more complex dependency chain, a cursory overview of PASSes and FAILs (rightly or wrongly) will contribute to my decision as to whether this module merits further investigation. If I see an 80% failure rate I might not bother looking further. Shame on me, I know. But it does tell me the author didn't bother to get the installation process "right" (at least where "right" means following the best practices that the CPAN testers promote). If that part is messed up, what else is? Quick judgement call: Move on to one that is apparently less quirky.

        To me it comes down to accurately categorizing issues in a way that provides clarity. NA, not applicable... that's different from FAIL, fix me.

        I'm not saying you're wrong. You are obviously suggesting a more thoughtful approach: Look at the failures more closely. See what's really at issue. That's smart. And we're talking about a case where the reason should be pretty obvious when someone actually looks beyond the glaring failure rate. I'm just suggesting it is nice to get an NA where not applicable really is the case, rather than a fail.

        I think a common ground here is that we can both agree that simply skipping the test suite (generating a PASS) for an operating system that isn't supported is probably not ideal.
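        A sketch of that middle path in a test file, assuming Test::More: rather than skip_all (which still grades the run a PASS), BAIL_OUT with the recognized phrase so non-Win32 smokers grade it NA. The hard-coded $os here is purely for demonstration; a real t/ file would check $^O directly.

        ```perl
        use strict;
        use warnings;
        use Test::More;

        # Pretend platform for demonstration; a real test would use $^O.
        my $os = 'MSWin32';

        if ( $os !~ /^(?:MSWin32|cygwin)$/ ) {
            # BAIL_OUT aborts the whole suite; the phrase "OS unsupported"
            # is what the CPAN Testers tools look for to grade the run NA.
            BAIL_OUT('OS unsupported');
        }

        plan tests => 1;
        pass("platform $os is supported");
        ```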


        Dave