See Test::MockTime and perhaps some of the other Test::Mock* modules. I mock time, DBI, and MIME::Lite using several of the mock modules with great success.
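For anyone who hasn't tried it, here's a minimal sketch of mocking time() with Test::MockTime inside an ordinary Test::More script (the frozen timestamp is just an arbitrary example):

    use strict;
    use warnings;
    use Test::More tests => 2;
    use Test::MockTime qw(set_fixed_time restore_time);

    # Freeze the clock so date-dependent code becomes deterministic.
    set_fixed_time('2024-01-01T00:00:00Z');
    is( time(), 1704067200, 'time() is frozen at the mocked instant' );

    # Put the real clock back before testing anything else.
    restore_time();
    cmp_ok( time(), '>', 1704067200, 'real time() restored' );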
True laziness is hard work
| [reply] |
In "TDD" as the latest buzzword methodology, writing the tests first is what it means. Why would you need to do that? Because it substitutes for formal requirements. Perhaps it is a formalization of the "acceptance criteria" in an Agile process. Also, different teams may be writing and performing tests, in contrast to those writing the actual code.
More generally, having "critical test cases" defined as part of the design document doesn't mean you need running code for them. It just helps communicate the design and limitations. Test Cases might actually be Use Cases and communicate how the component will be used and what it's good for.
When doing it all myself, I've found that writing the code and the (actual) test code together works well. I'll write the constructors and test them, then write a tightly related group of functions and test those. Repeat until done.
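As a concrete (if contrived) sketch of that rhythm, the test file grows a small block of checks for each small block of code; the class and method names here are purely hypothetical:

    use strict;
    use warnings;
    use Test::More tests => 3;

    # Step 1: the constructor exists and behaves, so test exactly that much.
    use_ok('My::Counter');
    my $counter = My::Counter->new( start => 5 );
    isa_ok( $counter, 'My::Counter' );

    # Step 2: the next tightly-related group of methods, tested right away.
    is( $counter->increment, 6, 'increment bumps the count' );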
So ask yourself what needs to exist at what stage in the process, and make sure you have a good reason why. What is your process?
| [reply] |
I've worked under both MIL-STD-2167 and ISO 9000 and I can say they're both a waste of paper. I haven't looked at TDD but it sounds the same. The best way to ensure high-quality code is to hire excellent programmers. But since you can't always do that, I recommend that you work on the interface design first.
Every piece of software, whether it's a subroutine, module, object, or script, has an interface, either an API or a user interface. Getting it done first gives you a good idea of how to proceed. And once it's done, you can split the work in two: one person writes the tests, one writes the code, and you try to keep them independent. That way, you have two independent opinions of what the interface means and, hopefully, you can catch more bugs.
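A sketch of what "interface first" can look like in practice: a documented shell that compiles but does nothing yet, so one person can write tests against it while another writes the bodies. The package and method names are invented for illustration:

    package My::Queue;
    use strict;
    use warnings;

    # The contract, agreed on before any real code is written:
    #   new(%opts)  -> a My::Queue object
    #   enqueue($x) -> the new number of queued items
    #   dequeue()   -> the oldest item, or undef if the queue is empty
    sub new     { my ( $class, %opts ) = @_; bless { items => [] }, $class }
    sub enqueue { die 'enqueue() not implemented yet' }
    sub dequeue { die 'dequeue() not implemented yet' }

    1;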
| [reply] |
I generally agree with the notion that “methodologies usually devolve into masses of wasted paper,” yet I think that they usually contain the germ of some really good and useful ideas. The trick is to find a way to glom onto as much of that “goodness” as you can, while following a process that is actually practical. (After all, you are not in the business of writing books and giving expensive seminars, nor do you sell meaningless certifications. Your task is to actually do the work that the pundits merely talk about.)
One of the genuinely good ideas behind TDD is the notion of trying to drill down into the components of a system and to build test cases that exercise those components, insofar as is useful, in isolation. And then, at very frequent intervals, to run those tests “hands-free.” You will be surprised how often a change creeps in that breaks something. This technique makes it much harder for those “creeps” to stay around, undetected, long enough to cause trouble. The burglar alarm, so to speak, will go off right away. (After all, usually the hardest part of swatting a bug, besides knowing in time that you have a problem, is finding the damn thing ... and the second hardest part is making sure that it really is dead and that it stays dead forevermore.)
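A minimal sketch of the “hands-free” part, assuming your tests live under t/: a tiny driver built on App::Prove (the module behind the prove utility) that you can hang off a cron job, commit hook, or editor key binding:

    use strict;
    use warnings;
    use App::Prove;

    # Run every *.t file under t/, recursively, and exit non-zero on any
    # failure -- that non-zero exit is the burglar alarm going off.
    my $prover = App::Prove->new;
    $prover->process_args( '-r', 't' );
    exit( $prover->run ? 0 : 1 );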
I also think that there is some genuine “goodness,” but not dogma, in the notion of building tests before, or at least concurrently with, the writing of the thing that is to be tested. And here is why: sometimes the things that we are testing are “big fat hairy things” that are going to take a long time to write. If, for example, that “thing” is going to take one team of programmers two weeks to put together, and during those same two weeks another group of programmers needs to put together something that will eventually interface to it ... how do you assure that the two pieces will actually mesh? The answer is, “measure twice, cut once.” You develop the empty shell first, and you specify exactly how that shell will respond, and then you create tests which validate that response even though the initial responses are coming from “a dummy,” or mock object. Now, both teams have a target to hit: the developers of the object are shooting to produce something that matches the mocked behavior, while the developers of the object-user are shooting to produce something that matches, first, the mocked object, then its actual implementation. This strategy, although it does involve the “additional” cost of developing what will be a throw-away piece of code, does allow both teams to progress along the project’s Gantt chart in parallel.
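Here is a minimal sketch of that “mocked target” using Test::MockObject; PaymentGateway and its charge() method are hypothetical stand-ins for whatever the other team is really building:

    use strict;
    use warnings;
    use Test::More tests => 2;
    use Test::MockObject;

    # The "dummy": it responds exactly the way the real object has promised to.
    my $gateway = Test::MockObject->new;
    $gateway->set_isa('PaymentGateway');
    $gateway->mock( charge => sub {
        my ( $self, $amount ) = @_;
        return { ok => 1, charged => $amount };
    } );

    # The consuming team's code can be written and tested against it today.
    my $receipt = $gateway->charge(42);
    ok( $receipt->{ok},          'mocked gateway reports success' );
    is( $receipt->{charged}, 42, 'amount passes through unchanged' );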
A final “goodness” is that this general approach helps to keep the project in sync with whatever the project managers and their Gantt charts are telling the stakeholders in the project. There is actual data to replace the vague mumblings, and, lo and behold, the programmers also have that data; hence, less reason to resort to vague mumblings. It is much better, for all concerned, to be able to say ... “okay, we are off-course now, but we know that we are ‘here.’ Even though we’ve already determined that we ought to be ‘there,’ we are still on the map, and now we can intelligently plot a course-change and maybe still be home by suppertime.” (If you can’t say that, then you have left the harbor without a nautical chart, and there needs to be a strong counter-incentive to actually doing that.) Yeah, it sucks to say that you’re off course, but that’s a helluva lot better than refusing to admit that you are lost at sea.
To me ... “pragmatic, common-sense benefits equals goodness,” and anything else is (probably...) (useless) dogma. It’s very easy for me to see common-sense, pragmatic benefits in what I’ve just described, and within the extents that I described. It passes the “BS test,” I think...
(“HTH,” he mumbles, “I don’t teach seminars.” He quietly steps down off of soapbox, dusts it off, puts it away ...)
| [reply] |