This leads me to suspect that software test development is also not at its best when it is designed on a black-box basis.
Amen to that.
Years ago I was responsible for defining the test plan for an OS/2 replacement for a previous DOS app. I was given the IT department generated (black-box mandated) test plan for the original as a starting point. It consisted of a stack of green-and-white lined fanfold about 7 inches thick. The first 1 1/2" of that were the tests for checking the handling of the application's configuration file.
Of the 20-something lines in the file, 14 were fully qualified path names to various other files. For each of these there was a comprehensive set of tests that consisted of manually editing the line of the config file to introduce particular errors (non-existent drive letter; non-existent directory; space in a directory name; space in a filename; non-existent file; etc. etc.) and then running the program and ensuring that it detected the errors. This batch of tests (the first 1 1/2") took two people a week to run.
Looking inside the code, it was obvious that every one of those 14 paths was read and checked by the same common validation routine; testing that routine thoroughly once would have covered them all, rather than repeating the full set of hand-edited config mutations for each line.
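Something along these lines (a Perl sketch, purely illustrative -- the original app was a DOS/OS/2 program and the routine name, checks and error classes here are assumptions) shows the white-box alternative: drive the one shared validation routine directly, once per error class, instead of hand-editing the config file for each of the 14 lines:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Test::More;

    # Hypothetical stand-in for the single routine that validated
    # every fully qualified path read from the config file.
    sub validate_path {
        my ($path) = @_;
        return 'bad syntax'    unless $path =~ /^[A-Za-z]:\\/;  # expects an "X:\..." form
        return 'space in path' if     $path =~ / /;             # embedded space anywhere
        return 'missing file'  unless -e $path;                 # target does not exist
        return 'ok';
    }

    # One table of error classes exercises the routine for every case;
    # because all 14 config lines pass through this same code, there is
    # no need to repeat the exercise per line.
    my %cases = (
        'data.dat'                 => 'bad syntax',
        'C:\my dir\data.dat'       => 'space in path',
        'C:\app\no_such_file.dat'  => 'missing file',
    );

    for my $path ( sort keys %cases ) {
        is( validate_path($path), $cases{$path}, "detects: $cases{$path}" );
    }
    done_testing();

Run once per error class, something like that covers the whole 1 1/2" of fanfold in seconds rather than a person-week.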
Given that the waterfall development method required these tests to be re-run every time the program was modified--even if the change was a totally trivial spelling correction, or something major in a completely unrelated part of the program--the cost of that "black box" innocence of the inner workings was huge over the 7 years and hundreds of versions that had been tested.