in reply to Re^5: Self-testing modules
in thread Self-testing modules
Most bugs arise as a result of the programmer making assumptions. If the same programmer writes the tests for the code s/he wrote, they will make the same assumptions. The net result is that they write tests for every case they considered when writing the code, which all pass--giving them N of N tests (100%) passed and a hugely false sense of security.
I find that this does not happen if you're using TDD. When you only write code by first producing a failing test, you are forced to challenge the assumptions in your code at every stage. Every time you make something work, the next step is "how do I break this?".
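To make that cycle concrete, here's a minimal sketch with Test::More. The My::Queue class is purely hypothetical -- a stand-in for whatever you're really building; at the "red" stage the tests at the top existed before the package at the bottom did.

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Red: these tests were written first, against a My::Queue that didn't
# exist yet -- they failed, and the failure told us what to build next.
my $q = My::Queue->new;
is( $q->size, 0, 'a new queue starts out empty' );

# Green, then "how do I break this?" -- the next failing test probes
# behaviour we haven't implemented yet.
$q->add('job');
is( $q->take, 'job', 'take returns what was added' );

# The smallest implementation that makes both tests pass (purely
# hypothetical -- normally this lives in its own module file).
package My::Queue;
sub new  { bless { items => [] }, shift }
sub size { scalar @{ $_[0]{items} } }
sub add  { push @{ $_[0]{items} }, $_[1] }
sub take { shift @{ $_[0]{items} } }
```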
Therefore, the only way to test code is to test its compliance against a (rigorous) specification, and derive security through statistics.
As you can probably guess, I don't agree with the "therefore" and the "only" :-)
Specification-based testing is a great tool, but it's certainly not the be-all and end-all of testing. It brings its own set of good and bad points to the table, and is still affected by bad developer assumptions about the code. They're just assumptions of a different kind.
There are a whole bunch of different ways to go about testing: specification-based tests like Test::LectroTest, xUnit frameworks like Test::Class, procedural tests like the basic uses of Test::More and friends, data-driven tests like Test::Base, exploratory testing, integration-testing frameworks like FIT, and so on.
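For instance, a specification-based test with Test::LectroTest looks roughly like the sketch below. The my_abs() function is a hypothetical stand-in for real code under test; LectroTest generates a pile of random inputs and checks that the stated property holds for all of them.

```perl
#!/usr/bin/perl -w
use Test::LectroTest;

# Hypothetical function under test.
sub my_abs { my $n = shift; $n < 0 ? -$n : $n }

# The property: for any integer x, my_abs($x) is never negative.
Property {
    ##[ x <- Int ]##
    my_abs( $x ) >= 0;
}, name => "my_abs never returns a negative number";
```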
Take a look at Lessons Learned in Software Testing for a great book on the multitude of useful approaches to testing.
Not to mention practices like Test Driven Development and Design By Contract.
Picking the best tool for the work at hand is part of the job.
LectroTest isn't perfect (yet). It has fallen into the trap of becoming "expectation compliant" in as much as it plays the Test::Harness game of supplying lots of warm-fuzzies in the form of ok()s, and perpetuating the anomaly of reporting 99.73% passed instead of 0.27% "failed", or better still:
Well, the nice thing about Perl is that if you don't like the test reporting you can always change it. In fact, since I spent a chunk of yesterday re-learning how to fiddle with Test::Harness::Straps...
.... insert sound of typing here ...
...there you go.
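A failures-only report along those lines might look roughly like this -- a rough sketch assuming the Test::Harness 2.x Straps interface, where analyze_file() returns a results hash with a 'details' arrayref of per-test info (later versions returned a results object instead, so treat the key names as assumptions rather than a drop-in script).

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::Harness::Straps;

# Sketch: print one line per *failing* test instead of a stream of ok's.
my $strap = Test::Harness::Straps->new;

for my $test_file (@ARGV) {
    my %results = $strap->analyze_file($test_file);
    my $number  = 0;
    for my $detail ( @{ $results{details} || [] } ) {
        $number++;
        next if $detail->{ok};    # only report the bad news
        printf "%s: test %d failed (%s)\n",
            $test_file, $number, $detail->{name} || 'unnamed';
    }
}
```

Run it over t/*.t and you only hear about the failures.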
Personally, since test suites in Perl take so darn long to run, I like seeing the okays whirl past in the background, since it lets me know the darn thing hasn't hung.
I realise that anything less than a 100% pass at the end means I've fucked up, warm-fuzzies or not.
Preferably dropping the programmer into the debugger at the point of failure. Even more preferably, in such a way that the program can be back-stepped to the point of invocation and single-stepped through the code with the failing parameters in place, so that the failure can be followed.
There are already so-called Omniscient Debugging tools available for Java. So I guess it's just a trivial matter of programming :-)
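Even without omniscient debugging, plain Perl gets you a crude version of the first part. A hypothetical helper like the ok_or_break() sketch below sets $DB::single on failure, so running the test script under perl -d stops the debugger right at the failing check.

```perl
use strict;
use warnings;
use Test::More tests => 1;

# Hypothetical helper: behaves like ok(), but if the check fails and the
# script is running under "perl -d", set $DB::single so the debugger
# stops here, with the failing values still in easy reach.
sub ok_or_break {
    my ( $test, $name ) = @_;
    unless ($test) {
        no warnings 'once';
        $DB::single = 1;    # debugger breakpoint on failure
    }
    return ok( $test, $name );
}

ok_or_break( 1 + 1 == 2, 'arithmetic still works' );
```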