To test an arbitrary function foo, you almost always need several tests. The general idea is to supply in-range and out-of-range values for each of foo's inputs. In the simplest cases, each input has a defined valid range, and you make sure that, at minimum, each input is exercised with values above the maximum, below the minimum, just below the maximum, just above the minimum, and at one or more points between. Obviously, as the number of inputs grows, the combinatorics quickly become impractical, so you have to settle on a reasonable, workable subset of tests.
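A minimal sketch of that boundary-value idea, using a hypothetical single-input validator (the function, its range, and all names here are illustrative, not from the thread):

```python
# Hypothetical function under test: accepts values in [0, 100].
def in_range(x, lo=0, hi=100):
    return lo <= x <= hi

# Boundary-value cases: below min, at min, just above min,
# an interior point, just below max, at max, above max.
cases = [
    (-1, False),   # below the minimum
    (0, True),     # at the minimum
    (1, True),     # just above the minimum
    (50, True),    # an interior point
    (99, True),    # just below the maximum
    (100, True),   # at the maximum
    (101, False),  # above the maximum
]
for value, expected in cases:
    assert in_range(value) == expected, f"failed for {value}"
print("all boundary cases passed")
```

In a real suite each tuple would typically be one parametrized test case rather than a loop, so a failure reports which boundary broke.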
In more complex cases, the (valid, in-range) value of one input can affect the valid range of another input. This dependency follows from the intended behavior of foo.
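One common shape of that dependency is an interval, where the valid range of one input is anchored by another. A sketch, with all names hypothetical:

```python
# Hypothetical example: the valid range of `end` depends on `start`
# (end must not precede start, and both must lie in [0, 100]).
def make_interval(start, end):
    if not (0 <= start <= 100):
        raise ValueError("start out of range")
    if not (start <= end <= 100):  # end's valid range depends on start
        raise ValueError("end out of range for this start")
    return (start, end)

assert make_interval(10, 20) == (10, 20)

# end=5 would be in range on its own, but is invalid once start=10,
# so tests need to probe the *combination*, not each input alone.
try:
    make_interval(10, 5)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for end < start")
```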
But even with a well-engineered test suite, it's still possible the tests won't protect you from changes to a very thoroughly tested function. It's easy to miss corners (and other nexi) of the n-dimensional hyper-box, so however reasonable your code changes are, you are very likely to find that the existing tests don't catch every regression they introduce.
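To see why those corners get missed, it helps to count them. A quick sketch (the per-input boundary values are the illustrative ones from earlier, not anything prescribed by the thread):

```python
import itertools

# Enumerate the "corners" of the input hyper-box by taking the
# cartesian product of a few boundary values per input. With k
# boundary values per input and n inputs this is k**n cases, which
# is why exhaustive corner coverage becomes impractical as n grows.
boundary_values = {
    "x": [-1, 0, 100, 101],
    "y": [-1, 0, 100, 101],
    "z": [-1, 0, 100, 101],
}
corners = list(itertools.product(*boundary_values.values()))
print(len(corners))  # 4**3 = 64 combinations for just three inputs
```

At that growth rate, a hand-picked subset of combinations is unavoidable, and any subset can leave a corner untested.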
When looking at the code and planning a change, think about what might go wrong, then create more tests to cover those cases. Then, after a reasonable number of changes have been made, hold a code review, even if you are the only reviewer available. Sometimes, after a few days spent working on (or looking at) other code, you can come back and see problems you couldn't see when you made the changes.
In reply to Re: Testing my tests
by RonW
in thread Testing my tests (mutation testing)
by szabgab