I'm working on some tests for a module, and while I can figure out most cases, there are a few where I'm having trouble deciding whether to skip a test or fail it.
For example, if 1+1 = 3, I know I need to send a fail message. If I'm testing file ownership on DOS, I know I should skip. What about the cases in between?
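For what it's worth, here's a minimal Test::More sketch of those two clear-cut cases, so we're talking about the same mechanics (the filename and descriptions are made up; `$^O` values for Windows/DOS are the real ones):

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Case 1: a genuine failure -- the code under test gives a wrong answer,
# so the test should fail loudly.
is( 1 + 1, 2, 'addition works' );

# Case 2: an environment limitation -- file ownership is meaningless on
# DOS/Windows filesystems, so skip rather than fail.
SKIP: {
    skip 'file ownership not meaningful on this platform', 1
        if $^O eq 'MSWin32' || $^O eq 'dos';

    # stat()[4] is the owner's uid; this is just a placeholder ownership test
    ok( defined( ( stat $0 )[4] ), 'stat reports an owner uid' );
}
```

The point of the `SKIP:` block is that the harness still counts the test, but reports it as skipped with the reason, instead of as a failure.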
My first instinct is to be conservative and fail when in doubt. Unfortunately, a lot of Perl modules fail tests even when the modules work, and a lot of users ignore failed tests and install with force. A lot of modules test fine if you're installing manually, you've read the README, and you've set the correct environment variables, but fail miserably when installed with CPAN. I want tests that really mean it when they fail, and that makes me think: when in doubt, skip.
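One pattern that helps with the "works manually, fails under CPAN" problem is to skip the whole file unless the user has opted in via an environment variable, rather than failing when it's unset. A rough sketch (the `TEST_AUTHOR_NET` variable name is my own invention, not any convention the module requires):

```perl
use strict;
use warnings;
use Test::More;

# Tests here need network access, which an unattended CPAN install
# may not have -- so skip everything unless the user opted in.
plan skip_all => 'set TEST_AUTHOR_NET=1 to enable network tests'
    unless $ENV{TEST_AUTHOR_NET};

plan tests => 1;

ok( 1, 'placeholder network test' );
```

Under a plain `cpan` install the file reports "skipped" with the reason, so the module still installs cleanly, while a user who sets the variable gets the full test run.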
What kind of guidelines do you follow when deciding those borderline cases?
Update: Okay, so it sounds like the general consensus is: if users are dumb enough to ignore the test results and force an install, that's their own fault. We shouldn't make our tests more forgiving to accommodate them.
Thanks!
-Pileofrogs
In reply to Skip Vs. Fail by pileofrogs