actualize has asked for the wisdom of the Perl Monks concerning the following question:

Hello Fellow Monks,

I have inherited some code at work and have decided to use the project as an excuse to make my foray into TDD. I have made some progress by writing a test, then making sure the code passes the test. After that I move on to the next test. I am impressed by how easily this process catches it when I break something unintentionally.

However, I have read about people writing all of their tests first and then writing the code to fit the tests afterward. Is that the preferred method, or is my approach of writing one test at a time and then the code to pass it also a valid way of doing things?

Also, one more thing: I had a method that used time(). That isn't easily testable, because the time() I call in the test case will differ from the one called inside the method, making a correct comparison impossible. So I moved the time() call outside of the method and passed the result in as an argument, so I could easily compare my expected value with the output. Is this kind of change helping me by making my code more testable, or is there a better way to write the test without modifying the method? I guess what I want to know is: how much of my code should I be changing to accommodate tests?
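For illustration, here is a minimal sketch of the refactor described above, using hypothetical names (Session, expires_at): the timestamp is injected as an argument instead of being read inside the method, so the test can supply a fixed value and compare against a known result.

    use strict;
    use warnings;
    use Test::More tests => 2;

    {
        package Session;    # hypothetical class, for illustration only

        sub new {
            my ($class, %args) = @_;
            # The caller supplies the timestamp; time() is only a fallback.
            my $created = defined $args{created} ? $args{created} : time();
            return bless { created => $created }, $class;
        }

        # Deterministic under test, because it depends only on injected data.
        sub expires_at {
            my ($self, $ttl) = @_;
            return $self->{created} + $ttl;
        }
    }

    my $session = Session->new( created => 1_000_000 );
    is( $session->{created},        1_000_000, 'constructor stores the injected timestamp' );
    is( $session->expires_at(3600), 1_003_600, 'expiry is the injected timestamp plus the TTL' );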

Re: Test Driven Development Workflow
by GrandFather (Saint) on May 20, 2011 at 21:36 UTC

    See Test::MockTime and perhaps some of the other Test Mock modules. I mock time, DBI and MIME::Lite using various of the mock modules with great success.
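
    For example, here is a hedged sketch of Test::MockTime in a test script (the timestamps are arbitrary). Note that the module should be loaded before the code under test, so that its override of time(), localtime() and gmtime() is already in place when that code is compiled:

        use strict;
        use warnings;

        # Load Test::MockTime before anything that calls time(), so the
        # global override is installed first.
        use Test::MockTime qw( set_fixed_time restore_time );
        use Test::More tests => 2;

        set_fixed_time(1_000_000);                 # freeze the clock at a known epoch
        is( time(), 1_000_000, 'time() returns the frozen value' );

        set_fixed_time('2011-05-20T12:00:00Z');    # ISO-8601 strings are accepted too
        is( scalar gmtime(), 'Fri May 20 12:00:00 2011', 'gmtime() is frozen as well' );

        restore_time();                            # back to the real clock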

    True laziness is hard work
Re: Test Driven Development Workflow
by John M. Dlugosz (Monsignor) on May 20, 2011 at 19:29 UTC
    In "TDD" as the latest buzzword methodology, writing the tests first is what it means. Why would you need to do that? Because it substitutes for formal requirements. Perhaps it is a formalization of the "acceptance criteria" in an Agile process. Also, different teams may be writing and performing tests, in contrast to those writing the actual code.

    More generally, having "critical test cases" defined as part of the design document doesn't mean you need running code for them. It just helps communicate the design and limitations. Test Cases might actually be Use Cases and communicate how the component will be used and what it's good for.

    When doing it all myself, I've found that writing the code and (actual) test code together works well. I'll write the constructors and test them. Write a particular tightly-related group of functions and then test them. Repeat until done.
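
    For instance, a minimal sketch of that rhythm, growing a hypothetical Counter class a few methods at a time inside a single test file:

        use strict;
        use warnings;
        use Test::More;

        # Step 1: write the constructor, then test it.
        {
            package Counter;    # hypothetical class, grown alongside its tests
            sub new   { bless { n => 0 }, shift }
            sub value { $_[0]{n} }
        }
        my $c = Counter->new;
        isa_ok( $c, 'Counter' );
        is( $c->value, 0, 'starts at zero' );

        # Step 2: add a tightly related group of methods, then test them together.
        {
            package Counter;
            sub increment { $_[0]{n}++; return $_[0] }
            sub reset     { $_[0]{n} = 0; return $_[0] }
        }
        is( $c->increment->increment->value, 2, 'increment twice' );
        is( $c->reset->value,                0, 'reset goes back to zero' );

        done_testing();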

    So ask yourself what needs to exist at what stage in the process, and make sure you have a good reason why. What is your process?

      Why would you need to do that? Because it substitutes for formal requirements.

      I don't understand. Why would TDD substitute for formal requirements? Which parts of TDD are incompatible with working with formal requirements?

        It's not that TDD is incompatible with having formal requirements. It's that "Agile" Scrum/XP etc. refuse to use the "R" word. The role of having requirements needs to be filled by some other means; for example, the Acceptance Criteria written on the back of the Story Card. TDD and Agile are often used together. Without formal requirements going into a large effort (not a single "story" but a full design that will comprise many stories), something has to take the place of the Requirements. That is often the TDD process.

        So if you are using pre-written test cases instead of a document of formal requirements, then it's important to do a good job of it and indeed to write them first.

Re: Test Driven Development Workflow
by shawnhcorey (Friar) on May 21, 2011 at 12:30 UTC

    I've worked under both MIL STD 2167 and ISO9000 and I can say they're both a waste of paper. I haven't looked at TDD, but it sounds the same. The best way to ensure high-quality code is to hire excellent programmers. But since you can't always do that, I recommend that you work on the interface design first.

    Every piece of software, whether it's a subroutine, module, object, or script, has an interface, either an API or a user interface. Getting it nailed down first gives you a good idea of how to proceed. And once it's done, you can split the work in two: one person writes the tests and another writes the code, trying to keep them independent. That way you have two opinions on what the interface means and, hopefully, you can catch more bugs.
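
    As a hedged illustration of that split (all names here are hypothetical), the agreed interface can be written down first as a stub:

        # Parser.pm - the interface, agreed before tests or implementation exist
        package Parser;
        use strict;
        use warnings;

        sub new        { die 'not implemented' }    # Parser->new( delimiter => ',' )
        sub parse_line { die 'not implemented' }    # returns an array ref of fields

        1;

    One half of the split then writes the tests from the interface alone, while the other replaces the stub with the real implementation; both work from the same contract:

        # t/parser.t - written against the interface, not the implementation
        use strict;
        use warnings;
        use Test::More tests => 2;
        use Parser;

        my $p = Parser->new( delimiter => ',' );
        isa_ok( $p, 'Parser' );
        is_deeply( $p->parse_line('a,b,c'), [qw( a b c )], 'splits on the delimiter' );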

Re: Test Driven Development Workflow
by sundialsvc4 (Abbot) on May 23, 2011 at 12:50 UTC

    I generally agree with the notion that “methodologies usually devolve into masses of wasted paper,” yet I think that they usually contain the germ of some really good and useful ideas.   The trick is to find a way to glom onto as much of that “goodness” as you can, while following a process that is actually practical.   (After all, you are not in the business of writing books and giving expensive seminars, nor do you sell meaningless certifications.   Your task is to actually do the work that the pundits merely talk about.)

    One of the genuinely good ideas behind TDD is the notion of trying to drill-down into the components of a system and to build test-cases that exercise those components, insofar as is useful, in isolation.   And then, at very frequent intervals, to run those tests “hands-free.”   You will be surprised how often a change creeps in that breaks something.   This technique makes it much harder for those “creeps” to stay around, undetected, long enough to cause trouble.   The burglar alarm, so to speak, will go off right away.   (After all, usually the hardest part of swatting a bug, besides timely knowing that you have a problem, is finding the damn thing ... and the second hardest part is to make sure that it really is dead and that it stays dead forevermore.)

    I also think that there is some genuine “goodness,” but not dogma, in the notion of building tests before, or at least concurrent with, the writing of the thing that is to be tested.   And here is why:   sometimes the things that we are testing are “big fat hairy things” that are going to take a long time to write.   If, for example, that “thing” is going to take one team of programmers two weeks to put together, and during those same two weeks another group of programmers needs to put together something that will eventually interface to it ... how do you assure that the two pieces will actually mesh?   The answer is, “measure twice, cut once.”   You develop the empty-shell first, and you specify exactly how that shell will respond, and then you create tests which validate that response even though the initial responses are coming from “a dummy,” or mock object.   Now, both teams have a target to hit:   the developers of the object are shooting to produce something that matches the mocked behavior, while the developers of the object-user are shooting to produce something that matches, first the mocked object, then its actual implementation.   This strategy, although it does involve the “additional” cost of developing what will be a throw-away piece of code, does allow both teams to be progressing along the project’s GANTT chart in parallel.
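
    Here is a small sketch of that “empty shell” idea in Perl, with every name hypothetical: the mock pins down the agreed response shape, and the consuming team's tests are written against it until the real class is ready to take its place.

        use strict;
        use warnings;
        use Test::More tests => 2;

        # The "empty shell": a mock of a service one team will build later,
        # pinned to the agreed response shape so the other team can build
        # and test against it now.
        {
            package Mock::PaymentGateway;

            sub new { bless {}, shift }

            # Contract: charge() returns a hash ref with 'ok', 'txn_id' and
            # 'amount' keys. The real implementation must honour this shape.
            sub charge {
                my ($self, %args) = @_;
                return { ok => 1, txn_id => 'MOCK-0001', amount => $args{amount} };
            }
        }

        # Tests written against the mocked behaviour; later the same tests
        # are pointed at the real class to confirm the two pieces mesh.
        my $gw  = Mock::PaymentGateway->new;
        my $res = $gw->charge( amount => 42 );

        ok( $res->{ok},         'charge reports success' );
        is( $res->{amount}, 42, 'amount comes back as specified' );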

    A final “goodness” is that this general approach helps to keep the project firmly in sync with whatever the project-managers and their GANTT charts are telling the stakeholders in the project.   There is actual data to replace the vague mumblings, and, lo and behold, the programmers also have that data; hence, less reason to resort to vague mumblings.   It is much better, for all concerned, to be able to say... “okay, we are off-course now, but we know that we are ‘here.’   Even though we’ve already determined that we ought to be ‘there,’ we are still on the map, and now we can intelligently plot a course-change and maybe still be home by suppertime.”   (If you can’t say that, then you have left the harbor without a nautical chart, and there needs to be a strong counter-incentive to actually doing that.)   Yeah, it sucks to say that you’re off course, but that’s a helluva lot better than refusing to admit that you are lost at sea.

    To me ... “pragmatic, common-sense benefits equals goodness,” and anything else is (probably...) (useless) dogma.   It’s very easy for me to see common-sense, pragmatic benefits in what I’ve just described, and within the extents that I described.   It passes the “BS test,” I think...

    (“HTH,” he mumbles, “I don’t teach seminars.”   He quietly steps down off of soapbox, dusts it off, puts it away ...)