in reply to Test Driven Development Workflow
I generally agree with the notion that “methodologies usually devolve into masses of wasted paper,” yet I think that most of them contain the germ of some really good and useful ideas. The trick is to find a way to glom onto as much of that “goodness” as you can, while following a process that is actually practical. (After all, you are not in the business of writing books and giving expensive seminars, nor do you sell meaningless certifications. Your task is to actually do the work that the pundits merely talk about.)
One of the genuinely good ideas behind TDD is the notion of drilling down into the components of a system and building test cases that exercise those components, insofar as is useful, in isolation ... and then, at very frequent intervals, running those tests “hands-free.” You will be surprised how often a change creeps in that breaks something. This technique makes it much harder for those “creeps” to stay around, undetected, long enough to cause trouble. The burglar alarm, so to speak, will go off right away. (After all, usually the hardest part of swatting a bug, besides learning in a timely fashion that you have a problem, is finding the damn thing ... and the second hardest part is making sure that it really is dead and that it stays dead forevermore.)
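To make that concrete, here is a minimal sketch in Python. Everything in it is hypothetical, invented purely for illustration: a little `parse_price` component, exercised in isolation (no database, no network, no UI) by a test file that any commit hook or CI job can run “hands-free”:

```python
import unittest

def parse_price(text):
    """Hypothetical component under test: turn a string like '$1,299.99' into integer cents."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    return round(float(cleaned) * 100)

class ParsePriceTest(unittest.TestCase):
    """Exercises the component in isolation: no database, no network, no UI."""

    def test_plain_dollars(self):
        self.assertEqual(parse_price("$12.50"), 1250)

    def test_thousands_separator(self):
        self.assertEqual(parse_price("$1,299.99"), 129999)

    def test_leading_and_trailing_whitespace(self):
        self.assertEqual(parse_price("  $5.00 "), 500)

if __name__ == "__main__":
    unittest.main()   # the "burglar alarm": wire this into a commit hook or CI step
```

The point is not these particular assertions; it is that the alarm costs almost nothing to run, so you can afford to run it constantly.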
I also think that there is some genuine “goodness,” though not dogma, in the notion of building tests before, or at least concurrently with, the writing of the thing that is to be tested. And here is why: sometimes the things that we are testing are “big fat hairy things” that are going to take a long time to write. If, for example, that “thing” is going to take one team of programmers two weeks to put together, and during those same two weeks another group of programmers needs to put together something that will eventually interface with it ... how do you ensure that the two pieces will actually mesh? The answer is, “measure twice, cut once.” You develop the empty shell first, you specify exactly how that shell will respond, and then you create tests which validate that response even though the initial responses are coming from “a dummy,” or mock object. Now both teams have a target to hit: the developers of the object are shooting to produce something that matches the mocked behavior, while the developers of the object’s user are shooting to produce something that matches, first, the mocked object, then its actual implementation. This strategy, although it does involve the “additional” cost of developing what will be a throwaway piece of code, allows both teams to progress along the project’s Gantt chart in parallel.
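Here is one way that plays out in code, again as a hypothetical Python sketch. Suppose the agreed-upon contract is an inventory service with a single method, `reserve(sku, qty)`. One team goes off to build the real thing; meanwhile, the other team builds and tests its `OrderProcessor` against a “dummy” that answers exactly as the contract specifies:

```python
import unittest
from unittest.mock import Mock

class OrderProcessor:
    """The consuming team's component, written against the contract, not the implementation."""

    def __init__(self, inventory):
        self.inventory = inventory

    def place_order(self, sku, qty):
        # The contract: reserve() returns True when stock was successfully set aside.
        if not self.inventory.reserve(sku, qty):
            return "backordered"
        return "confirmed"

class OrderProcessorTest(unittest.TestCase):
    def test_confirms_when_stock_is_reserved(self):
        inventory = Mock()
        inventory.reserve.return_value = True   # the "dummy" responds as specified
        self.assertEqual(OrderProcessor(inventory).place_order("X-1", 2), "confirmed")
        inventory.reserve.assert_called_once_with("X-1", 2)

    def test_backorders_when_reservation_fails(self):
        inventory = Mock()
        inventory.reserve.return_value = False
        self.assertEqual(OrderProcessor(inventory).place_order("X-1", 2), "backordered")

if __name__ == "__main__":
    unittest.main()
```

When the real inventory service lands, these same tests stand guard: swap the mock for the genuine article, and any mismatch between the two teams’ understandings of the contract sets off the alarm.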
A final “goodness” is that this general approach helps to keep the project in sync with whatever the project managers and their Gantt charts are telling the stakeholders in the project. There is actual data to replace the vague mumblings, and, lo and behold, the programmers also have that data; hence, less reason to resort to vague mumblings. It is much better, for all concerned, to be able to say ... “okay, we are off course now, but we know that we are ‘here.’ Even though we’ve already determined that we ought to be ‘there,’ we are still on the map, and now we can intelligently plot a course change and maybe still be home by suppertime.” (If you can’t say that, then you have left the harbor without a nautical chart, and there needs to be a strong counter-incentive to doing that.) Yeah, it sucks to say that you’re off course, but that’s a helluva lot better than refusing to admit that you are lost at sea.
To me ... “pragmatic, common-sense benefits equals goodness,” and anything else is (probably ...) (useless) dogma. It’s very easy for me to see common-sense, pragmatic benefits in what I’ve just described, and within the limits that I described. It passes the “BS test,” I think ...
(“HTH,” he mumbles, “I don’t teach seminars.” He quietly steps down off of soapbox, dusts it off, puts it away ...)