And that is somehow better than assuming that the failure of a module named Win32::something, reported by a tester not on that platform, is a "Not applicable"?
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
I believe it's better in two ways:
First, a module author gets failure reports via email (if so configured). I want those reports to represent actual failures that I need to research; I don't need to look into an NA, because I already know what it means (or I should). As an author, I get more useful analysis from http://analysis.cpantesters.org if I withhold from it failures that are not interesting to me. When I look at 250 test reports and 80% of them are failures, even though I know I can ignore the non-Win32 ones, it's easier to see what's going on if FAIL means a failure where a PASS was expected, and NA means a failure that was fully expected.
Second, as a potential user... particularly as a user who is considering a given module as part of a more complex dependency chain, a cursory overview of PASSes and FAILs (rightly or wrongly) will contribute to my decision as to whether this module merits further investigation. If I see an 80% failure rate I might not bother looking further. Shame on me, I know. But it does tell me the author didn't bother to get the installation process "right" (at least where "right" means following the best practices that the CPAN testers promote). If that part is messed up, what else is? Quick judgement call: Move on to one that is apparently less quirky.
To me it comes down to accurately categorizing issues in a way that provides clarity. NA, not applicable... that's different from FAIL, fix me.
I'm not saying you're wrong. You are obviously suggesting a more thoughtful approach: Look at the failures more closely. See what's really at issue. That's smart. And we're talking about a case where the reason should be pretty obvious when someone actually looks beyond the glaring failure rate. I'm just suggesting it is nice to get an NA where not applicable really is the case, rather than a fail.
I think a common ground here is that we can both agree that simply skipping the test suite (generating a PASS) for an operating system that isn't supported is probably not ideal.
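For what it's worth, there is a conventional middle path on the author's side: rather than skipping tests (a misleading PASS) or letting them fail (a misleading FAIL), a Makefile.PL can bail out on unsupported platforms. By convention, dying with a message containing "OS unsupported" before WriteMakefile is what the CPAN Testers tooling recognizes and grades as NA. A minimal sketch, with a hypothetical module name:

```perl
# Makefile.PL -- hypothetical Windows-only distribution
use strict;
use warnings;
use ExtUtils::MakeMaker;

# Bail out early on non-Windows platforms.  By convention, dying with
# an "OS unsupported" message here causes CPAN Testers to file the
# report as NA instead of FAIL.
die "OS unsupported: this distribution requires Win32\n"
    if $^O ne 'MSWin32';

WriteMakefile(
    NAME    => 'Win32::Example',   # hypothetical name
    VERSION => '0.01',
);
```

Whether authors should *have* to do this is, of course, exactly the point under dispute below.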
First, ...
All of that would be equally applicable if the software translated failures of Win32::* modules from non-windows platforms into NA.
Second, ...
And so does that.
Indeed, you are just repeating my reasoning.
The only difference between us here is that an author should not have to read a third-party add-on's "authors guide" just to stop that add-on from scurrilously giving perfectly good modules and authors a bad reputation, simply because the authors of that add-on are too damn lazy to fix their software.
It would be the work of minutes for the authors of this crap to fix the problem -- which was pointed out to them years ago. It would take very little effort on their part -- ONCE -- to convert failure reports from non-Windows testers for Windows-only modules into "Not applicable".
Forcing -- ALL -- the authors of -- ALL -- those modules to mess with the preferences and requirements of their modules in order to disarm the misrepresentation of their code, by a mechanism they didn't ask for and cannot opt out of, is just plain wrong!
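The fix being argued for above would amount to something like the following on the testers' side. This is a hypothetical sketch -- `adjust_grade`, its arguments, and the naming heuristic are all my own illustration, not anyone's actual code: before filing a FAIL for a distribution in the Win32 namespace, check whether the tester's platform is Windows at all, and if not, file NA instead.

```perl
use strict;
use warnings;

# Hypothetical sketch of the proposed testers-side fix: reclassify a
# FAIL as NA when the distribution is Windows-only (judged here by a
# crude namespace heuristic) and the tester's OS is not Windows.
sub adjust_grade {
    my ($grade, $dist_name, $tester_os) = @_;
    return 'NA'
        if $grade eq 'FAIL'
        && $dist_name =~ /^Win32(?:-|::|$)/
        && $tester_os ne 'MSWin32';
    return $grade;
}

# e.g. adjust_grade('FAIL', 'Win32-API', 'linux') would yield 'NA',
# while adjust_grade('FAIL', 'Win32-API', 'MSWin32') stays 'FAIL'.
```

A namespace match is obviously a blunt instrument, but it illustrates how small the change being asked for is.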