talexb has asked for the wisdom of the Perl Monks concerning the following question:

After many years of writing Perl I am finally starting to write tests for the modules that I've written, and this is presenting me with a bit of a challenge.

My first test file starts with .. shall we say .. a base object, from which all other objects flow. My plan is to be able to run this test right after an install to check that some bare-bones functionality is present, and again after I've primed the system with a few test files.

Thus, the second test run will be more comprehensive. I believe I know how to do that, using Schwern's excellent Test::More and the SKIP: feature, but I'm concerned that I'm going to end up with a larger and larger test file. Should I, or can I, break out the other tests into separate files in an attempt to modularize? Or is there a cunning way to run groups of tests from a superior file?
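Something along these lines is what I have in mind -- My::Base and the marker-file check are just placeholders for whatever I actually end up with:

    use Test::More tests => 5;

    # Bare-bones tests that always run.
    BEGIN { use_ok('My::Base') }          # hypothetical module name
    ok( My::Base->new, 'constructor returns something' );

    SKIP: {
        # Hypothetical check: a marker file tells us the system
        # has been primed with test files.
        skip 'system not yet primed with test files', 3
            unless -e 't/data/primed';

        pass('comprehensive test 1');     # stand-ins for the real
        pass('comprehensive test 2');     # second-stage tests
        pass('comprehensive test 3');
    }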

I welcome your feedback.

Alex / talexb / Toronto

"Groklaw is the open-source mentality applied to legal research" ~ Linus Torvalds

Re: How to structure tests that span several modules
by metaperl (Curate) on Feb 03, 2005 at 21:43 UTC
    Should I, or can I, break out the other tests into separate files in an attempt to modularize? Or is there a cunning way to run groups of tests from a superior file?
    Most people simply name the test files so that they sort -- and therefore run -- in a particular order.

    It sounds like you need to do some things manually between test1 and test2? If so, use Term::ReadKey or some other means of receiving input from the tester indicating that it is time to continue.

    So no, I don't think SKIP is what you want. I think you basically would have:

    0-system_check.t
    1-read_user_input.t
    2-retest_on_files.t
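    A minimal sketch of the pause step in 1-read_user_input.t (plain STDIN shown here; Term::ReadKey would let you grab a single keypress instead):

        # 1-read_user_input.t -- pause until the tester has primed the system
        use Test::More tests => 1;

        diag('Prime the system with your test files now.');
        diag('Press Enter when you are ready to continue ...');
        my $answer = <STDIN>;             # block until the tester responds

        ok( defined $answer, 'tester signalled that setup is done' );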

      Hmm. Well, now I understand why the test files are numbered -- so they're done in a particular order.

      But I think this doesn't help me with the question I have about modularizing the files. In addition, the 'retest' on your list would not necessarily include the 'system_check' tests. OK, well, I think I have enough information to go ahead and try this out.

      And I'll use the diag method to inform the user of what's going on and perhaps get them to run some additional tests with parameter values (document ID of a specific test file, for example).
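      Something as simple as this, probably (the document ID handling is invented for illustration):

          use Test::More tests => 1;

          # Hypothetical: accept an optional document ID on the command line.
          my $doc_id = shift @ARGV || 'default-doc';

          diag("Running against document ID '$doc_id'; pass another ID on");
          diag('the command line to exercise a different test file.');

          ok( 1, "placeholder check for document '$doc_id'" );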

      Alex / talexb / Toronto

      "Groklaw is the open-source mentality applied to legal research" ~ Linus Torvalds

Re: How to structure tests that span several modules
by jplindstrom (Monsignor) on Feb 03, 2005 at 22:21 UTC
    This is my setup for a generic project:

    source/
        bin/
        lib/
        t/
            data/

    The lib directory (and below) contains the source files (My::Module::Name goes in lib/My/Module/Name.pm).

    The t directory contains all the test files, one for each class (e.g. My-Module-Name.t). Sometimes there are extra files for a class when some aspect of it needs more tests (e.g. My-Module-Name-convert.t). I don't care about the order in which the tests run; if something breaks, it's usually easy to figure out the root cause of the breakage.

    The t/data directory contains data used by the tests.

    Basic classes are easily tested by themselves. Other classes have dependencies, and then it becomes necessary to set up an environment for them to work in. I do this in each file that needs it. When that becomes very repetitive, it may be a good idea to refactor the setup code into a class (e.g. SetupFoo.pm) which I put in the t directory and use from the .t files, as sketched below.
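    A minimal sketch of that helper pattern (SetupFoo and everything it sets up are placeholders):

        # t/SetupFoo.pm -- shared environment setup for tests that need it
        package SetupFoo;
        use strict;
        use warnings;
        use File::Temp qw(tempdir);

        # Build a throwaway environment and hand back its location.
        sub setup {
            my $dir = tempdir( CLEANUP => 1 );
            # ... create config files, seed data, etc. in $dir ...
            return $dir;
        }

        1;

        # ... and in a .t file that needs it:
        #   use lib 't';
        #   use SetupFoo;
        #   my $env_dir = SetupFoo::setup();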

    For complicated chains of dependent classes, or for external resources like web server access or time, I find it useful to fake things with Test::MockObject. WWW::Mechanize::Cached is also useful for faking web sites.
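    For example, a mocked user-agent-ish object standing in for real web access (the 'get' method and its canned reply are invented for the example):

        use Test::More tests => 2;
        use Test::MockObject;

        # Fake a user agent so no real network access is needed.
        my $mock_ua = Test::MockObject->new;
        $mock_ua->set_always( get => 'canned response' );

        is( $mock_ua->get('http://example.com'), 'canned response',
            'mock hands back the canned response' );
        $mock_ua->called_ok('get');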

    Well, that's how I do it.

    /J

Re: How to structure tests that span several modules
by CountZero (Bishop) on Feb 03, 2005 at 22:27 UTC
    Yes, I think it is advisable to split your tests into small(ish) modules, each testing a certain aspect of your code. That way you have a greater chance to "recycle" these tests for other code: e.g. a test to check whether the module loads; another to test whether objects are properly instantiated; whether standard input gives the expected results; whether edge cases are handled properly; and so on. You will probably need a similar (but not identical) suite of tests for many of your scripts. Having a score of "standard" tests available (a "test library") would let you set up tests more easily and with more confidence.
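    The load-and-instantiate test, for instance, is small enough to reuse almost verbatim across distributions (My::Module is a placeholder name):

        use Test::More tests => 2;

        BEGIN { use_ok('My::Module') }    # does the module even load?

        my $obj = My::Module->new;
        isa_ok( $obj, 'My::Module' );     # does it instantiate properly?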

    Putting this all in one large test script is asking for trouble: after a while you would need a test to check the tests!

    CountZero

    "If you have four groups working on a compiler, you'll get a 4-pass compiler." - Conway's Law

Re: How to structure tests that span several modules
by dragonchild (Archbishop) on Feb 04, 2005 at 02:00 UTC
    You might want to look at mock objects, specifically Test::MockObject.

    They will help decouple the various components of your system so you can focus on testing one specific piece. This will also allow you to simulate failures that you can't reliably create so you can test your error-handling in a safe and repeatable fashion.
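    For instance, a mock whose method always dies lets you exercise an error path deterministically (the "database handle" here is hypothetical):

        use Test::More tests => 1;
        use Test::MockObject;

        # A 'database handle' whose connect always fails, so the caller's
        # error handling can be exercised safely and repeatably.
        my $mock_dbh = Test::MockObject->new;
        $mock_dbh->mock( connect => sub { die "connection refused\n" } );

        eval { $mock_dbh->connect };
        like( $@, qr/connection refused/, 'error path reachable on demand' );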

    Being right, does not endow the right to be rude; politeness costs nothing.
    Being unknowing, is not the same as being stupid.
    Expressing a contrary opinion, whether to the individual or the group, is more often a sign of deeper thought than of cantankerous belligerence.
    Do not mistake your goals as the only goals; your opinion as the only opinion; your confidence as correctness. Saying you know better is not the same as explaining you know better.

Re: How to structure tests that span several modules
by xdg (Monsignor) on Feb 04, 2005 at 13:15 UTC

    I can offer several thoughts from my own practices -- no claim that any of them is right (clearly I don't even have a really consistent standard), but one or more may resonate with you.

    • For a really simple module, all the tests go in one file in the t/ directory
    • For utility modules, I tend to break out test scripts somewhat ad hoc to test different types of functionality. (You might think of this as testing 'use cases')
    • For a distribution with lots of classes/subclasses, I've generally gone with one test file per module, testing whatever is unique/different about that module
    • In one case, with a really complicated object hierarchy, I've set up subdirectories within t/ to mimic my object hierarchy and customized the Build.PL to search t/ recursively (a minimal Build.PL sketch follows this list).
    • If I find myself with a lot of redundant setup code, I create a "TestHelper.pm" module in the t/ directory and have my test scripts load that. (Though I recently learned about Test::Class and think it might be a better approach for that kind of thing.)
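    Here is the minimal Build.PL sketch promised above (module_name is a placeholder; recursive_test_files is the Module::Build property that makes ./Build test descend into subdirectories of t/):

        # Build.PL
        use Module::Build;

        Module::Build->new(
            module_name          => 'My::Module',   # placeholder
            license              => 'perl',
            recursive_test_files => 1,              # search t/ recursively
        )->create_build_script;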

    Generally, I consider test scripts cheap and don't hesitate to create a new one to test a narrow set of functionality. That said, I try to keep related tests grouped together in one file.

    Also, in your core toolkit of Test modules, make sure you look at Test::Exception. I use it so frequently that my default .t template automatically sticks that in along with Test::More. (And if you're doing a lot of floating-point math, you might appreciate my own Test::Number::Delta, too.)
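    A taste of what Test::Exception buys you (My::Module and its constructor behaviour are hypothetical):

        use Test::More tests => 2;
        use Test::Exception;
        use My::Module;                   # placeholder module

        # dies as expected, with the expected message
        throws_ok { My::Module->new( bogus => 1 ) }
            qr/unknown option/, 'constructor rejects unknown options';

        # survives as expected
        lives_ok { My::Module->new } 'default constructor lives';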

    -xdg

    Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.

Re: How to structure tests that span several modules
by halley (Prior) on Feb 04, 2005 at 16:37 UTC
    I have a complicated and interconnected module library. In my t/ directory, I have a 0.t script which runs all the other *.t scripts. It then runs figlet to tell me:
        [figlet banner spelling "success"]

    or

        [figlet banner spelling "failure"]
    Works nicely, since I can run individual test groups as I develop them, then a sanity check of several hundred tests before I commit to version control.
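    A minimal version of such a 0.t, assuming Test::Harness's runtests (which dies if any test fails):

        #!/usr/bin/perl
        # 0.t -- run every other test script in t/, then shout the verdict
        use strict;
        use warnings;
        use Test::Harness qw(runtests);

        my @tests = grep { $_ ne 't/0.t' } glob 't/*.t';

        eval { runtests(@tests) };        # dies if any test fails
        system( 'figlet', $@ ? 'failure' : 'success' );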

    --
    [ e d @ h a l l e y . c c ]

Re: How to structure tests that span several modules
by bluto (Curate) on Feb 04, 2005 at 21:20 UTC
    One simple approach I used for a project was a special script that knew about test ordering as well as group ordering. Basically, this was set up in the __DATA__ section of the test script, something like this:

        #testname       testgroup
        base.t          core
        dvd_core.t      core
        dvd_mount.t     dvdread
        dvd_read.t      dvdread
        dvd_burn.t      dvdwrite
        tape_core.t     core
        tape_mount.t    taperead
        tape_read.t     taperead
        tape_write.t    tapewrite

    This described the test order (from top to bottom) and each test's group name. The user would invoke the script like "test.pl taperead" (or via make), which would run the non-hardware tests ("core") and then test mounting and reading (but not writing) a tape. That way I could run just the tests I wanted (some were very time consuming), and I didn't have to put hardware-existence checks in the various scripts or mess with embedding Test::More SKIP sections everywhere. Obviously you can get sophisticated here with respect to groupings, but it's pretty easy to set up a basic group dependency list with a hash; a stripped-down sketch follows.
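    A stripped-down version of that driver, assuming Test::Harness; the table mirrors the __DATA__ section above:

        #!/usr/bin/perl
        # test.pl -- run the 'core' group plus the requested group,
        # in the top-to-bottom order given by the __DATA__ table.
        use strict;
        use warnings;
        use Test::Harness qw(runtests);

        my $group = shift @ARGV or die "usage: $0 <testgroup>\n";
        my %run = map { $_ => 1 } ( 'core', $group );   # core always runs

        my @tests;
        while ( my $line = <DATA> ) {
            next if $line =~ /^\s*(#|$)/;               # skip comments/blanks
            my ( $test, $testgroup ) = split ' ', $line;
            push @tests, "t/$test" if $run{$testgroup};
        }

        runtests(@tests);

        __DATA__
        #testname       testgroup
        base.t          core
        dvd_core.t      core
        dvd_mount.t     dvdread
        dvd_read.t      dvdread
        dvd_burn.t      dvdwrite
        tape_core.t     core
        tape_mount.t    taperead
        tape_read.t     taperead
        tape_write.t    tapewrite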