in reply to Testing 1...2...3...

"If it hasn't been tested, it doesn't work."
What kind of tests are you referring to? Unit tests? System tests? Acceptance tests? Usability tests? Stress tests? ADA compliance tests? Colour-blindness tests? Security tests? Cross-platform tests? Data-consistency tests? Data-validity tests? Code reviews? Internal auditing? External auditing? Backwards-compatibility tests? Switch-over tests? Restore-from-backup tests? Disaster recovery tests? Disk-full tests? "What-happens-if-I-yank-a-cable" tests? "Let's-change-the-password-halfway-through-the-procedure" tests? "Send-it-random-data-for-24-hours" tests? "On-call-phone" tests? Power-consumption tests? Power-failure tests? Fire drills? Climate tests? Copyright violation tests? Patent violation tests?

What's the last project you did where you ran at least half of the tests I listed? (Except for the last 3, I've done all the tests I listed, but never on the same project.)

Re^2: Testing 1...2...3...
by raybies (Chaplain) on Dec 08, 2010 at 13:27 UTC

    Heh. Great list. There are other tests too, like Protocol/Standards-Certification tests, Code-Coverage tests, Timing-Delay/Jitter tests, Quality/Customer/Play tests and Performance tests.

    There are also various models of testing, including Golden-model Comparison testing, Black-box Testing, White-box Testing, Directed Tests, Corner-case Testing, Hardware/Software Emulator Tests and Random Stimulus (You mentioned this, though why limit it to 24 hours? ;)).

    And then of course the tests that test the tests that test the tests that test the tests... :)

    Personally, I came to programming as a vehicle for testing hardware models (which were actually software models of what would be made into hardware... heh). The majority of the tests you mention above were necessary to ensure that the cost of developing a chip was kept to a minimum. In that model, it pays to test things exhaustively because the cost of additional tapeouts (essentially bugfixes on a chip) is prohibitively high (around a million bucks a turn at the time). In that developer paradigm, the price of a bug was so high that even the dopiest management saw the value in front-loading the test teams early, and we divided those who wrote the tests from those who did the development. On the best-tested projects we developed test suites in parallel with the developers from day one of the design.

    Perhaps I'm paranoid, but my latest development teams have come nowhere close to that level of testing discipline. And "amazingly enough" the software is remarkably brittle.

    One nice thing about Perl is that it lets me throw a lot of data at the test object quickly, without getting in its way.
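    To make that concrete, here's a minimal sketch of a random-stimulus harness in Perl using Test::More. The unit under test, `parse_record`, is hypothetical (a stand-in for whatever the test object is); the invariant checked is simply "never dies, always returns a defined value" on arbitrary input:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More;

# Hypothetical unit under test: takes a string, must never die
# and must always return a defined value.
sub parse_record {
    my ($input) = @_;
    return '' unless defined $input && length $input;
    return join ',', split /;/, $input;
}

# Random-stimulus harness: generate noisy inputs and assert the
# invariant holds for every one of them.
my @alphabet = ( map( chr, 32 .. 126 ), "\t", "\n" );
for my $trial ( 1 .. 1000 ) {
    my $len   = int rand 64;
    my $input = join '', map { $alphabet[ rand @alphabet ] } 1 .. $len;
    my $out   = eval { parse_record($input) };
    ok( !$@ && defined $out, "trial $trial survives random input" );
}
done_testing();
```

    Crank the trial count (or the loop duration) up and this becomes the "send-it-random-data-for-24-hours" test; the point is how little harness code sits between the data generator and the thing being tested.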