in reply to Re^3: Testing IS Development in thread Testing IS Development
True. OTOH, I find that writing tests forces me to codify and explicitly state my assumptions (even if not in a form the typical end-user would understand), which, in turn, forces me to think about and identify those assumptions.
That is only added value if you don't think about assumptions when you are coding.
I generally do. I don't suddenly consider assumptions more when I code tests than when I write code. And I'm not talking about assumptions like "snow is always white". I'm talking "assume the data we're interested in is in table X in database Y on server Z", and I assume that because the company wiki says so. But then it turns out that table Z.Y.X is obsolete, and the data currently lives in tables A, B, and C in database D on server E. Testing is not going to find that, because when you write your tests, you create mock data shaped like table Z.Y.X. The tests succeed. The code would have worked fine if the report really did use data from table Z.Y.X. But since the assumption is wrong, the entire chain falls apart.
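The trap described above can be sketched in a few lines. This is a hypothetical Python illustration, not anything from the original posts: the table name, the `FakeDB` mock, and the report function are all invented to show how a mock fixture can encode the very assumption it should be challenging.

```python
# Hypothetical sketch: the test bakes in the same wrong assumption as the code.
# Both the code and the test believe the data lives in table Z.Y.X; if the data
# has actually moved elsewhere, the test still passes.

REPORT_TABLE = "Z.Y.X"   # assumption taken from the (outdated) company wiki

def build_report(db):
    """Sum the amounts found in the table we *believe* holds the data."""
    rows = db.query(f"SELECT amount FROM {REPORT_TABLE}")
    return sum(rows)

class FakeDB:
    """Mock database whose fixture data was created for table Z.Y.X."""
    def __init__(self, tables):
        self.tables = tables

    def query(self, sql):
        # Naive parse for the sketch: the last word of the SQL is the table name.
        table = sql.split()[-1]
        return self.tables.get(table, [])

def test_build_report():
    db = FakeDB({"Z.Y.X": [10, 20, 30]})   # mock data for the obsolete table
    assert build_report(db) == 60          # passes: mock matches the assumption

test_build_report()
```

The test is green, yet against the real database the report reads an obsolete table. No amount of mocking catches this, because the mock is built from the same wiki page as the code.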
Re^5: Testing IS Development
by sundialsvc4 (Abbot) on Mar 11, 2009 at 15:06 UTC
The tests of which you speak here really move into the realm of process data integrity, not the specific testing of any particular application.
Like any “manufacturing production-line,” the shop must have the means to validate where the data is actually coming from, and that the correct parameters were specified to the applications that were run. This is an ongoing part of the daily production process.
This presupposes, of course, that the applications themselves are “known good”; it's all essentially worthless if they're not. In other words, they do have a test-suite, it does validate the handling of the data that flows through each application, and it does also check that invalid data will be detected and rejected. Each time an application is deployed to the production environment (by the personnel responsible for that ... not the developers themselves), it must clear all tests.
So, the two concerns are complementary to each other, not exclusive.
I respectfully dissent. CPAN, for instance, wouldn't be CPAN without “all those tests.” After all, we don't need to be dealing with somebody else's bugs: we have plenty of our own.
Perhaps we can take the viewpoint of Thomas Edison's quote: “I know a hundred ways to build a light bulb that don't work.”
In our case, “we know a hundred ways and places that the code doesn't fail.”
This does not, of course, mean that the software is defect-free; obviously it has plenty of defects lurking in there somewhere. But the tests that we do have give us a good foundation for considering where the defects are much less likely to be.
I would also offer the opinion that this becomes a lot more important when you have a large number of developers working on the same project: there is no longer a single person who “lives, breathes, and sleeps with this piece of code every day,” and who therefore has a gut-instinct about it.
More than just a few people now need to have a basis for determining that the code is (and remains) reliable.
When a bug happens, all of them have to dig for it, and having some objective sense of where not to start digging (first) is very helpful.
*A programmer might also assume the area of a circle is 22/7 times the square of the radius, and write his/her tests accordingly.
Yes, but (s)he will at least detect when the assumption, correct or incorrect, no longer holds. See here for a real-world example where simple testing most likely would have prevented a serious bug.
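The 22/7 point can be made concrete. This is a hypothetical Python sketch (all names invented): code and test share the same inexact approximation of pi, so the test passes; but the moment someone changes the constant on one side only, the test fails and flags that the shared assumption no longer holds.

```python
# Hypothetical sketch: the programmer's 22/7 assumption lives in both the
# code and the test. The test passes, not because the code is correct, but
# because code and test agree. If the constant is later changed in one
# place but not the other, the test fails -- detecting that the assumption
# (right or wrong) no longer holds.

PI = 22 / 7   # the programmer's inexact assumption about pi

def circle_area(radius):
    """Area of a circle, computed with the assumed value of pi."""
    return PI * radius ** 2

def test_circle_area():
    # The test encodes the very same 22/7 assumption as the code,
    # so it passes even though the approximation is inexact.
    assert circle_area(7) == (22 / 7) * 49

test_circle_area()
```

The test never proves the area is right; it only pins down the assumption so that any later drift between code and expectation gets caught.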
--
No matter how great and destructive your problems may seem now, remember, you've probably only seen the tip of them. [1]