PerlMonks

How do you structure and run module test code?

by nysus (Parson)
on Feb 23, 2017 at 13:39 UTC [id://1182642]

nysus has asked for the wisdom of the Perl Monks concerning the following question:

I'd love to get some general pointers from pros on how to efficiently create tests for modules.

Using the Module::Starter::PBP module, I notice that the t/ directory has the following test files by default: 00.load.t, perlcritic.t, pod-coverage.t, pod.t.

Some of the questions I'm trying to answer are: When should I create a new .t file? How should I group my tests into the different .t files? What's best practice for naming the .t files?

Also, I'd like to be able to run the tests as efficiently as possible from vim, my tool of choice. Right now I'm using Damian Conway's vim configuration, which has a shortcut for running make on a module; it runs all the tests it finds. I imagine this could slow things down quite a bit, however, if I'm only interested in running a few of the test files. How are the pros running individual test files quickly and efficiently from vim?

Thanks!

$PM = "Perl Monk's";
$MCF = "Most Clueless Friar Abbot Bishop Pontiff Deacon Curate";
$nysus = $PM . ' ' . $MCF;

Re: How do you structure and run module test code?
by Corion (Patriarch) on Feb 23, 2017 at 13:46 UTC

    I think that the Perl Critic test, the POD coverage test and the POD test are author tests.

    I've long distributed these tests alongside the real functionality tests in t/, but I found that they mostly add noise from the CPAN testers, as changes in the POD modules or in Perl::Critic generate failures that are not really related to the module's functionality and usability. Thus I now move these tests into a separate directory, conventionally named xt/, for the release and author tests.

    Personally, I run my test suite nowadays through

    prove -bl xt t
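
    If you keep author tests under t/ anyway, a common alternative is to guard each one so it only runs on demand. A minimal sketch using the widely used AUTHOR_TESTING environment-variable convention (the guard, not the check itself, is the point here):

        use strict;
        use warnings;
        use Test::More;

        # Skip this whole file unless author tests were explicitly requested
        plan skip_all => 'Author test: set AUTHOR_TESTING=1 to run'
            unless $ENV{AUTHOR_TESTING};

        # ... the actual POD / Perl::Critic checks would follow here ...
        ok( 1, 'placeholder author check' );
        done_testing;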

    For grouping the tests, I usually try to aim for one set of functionality per test program. This is either one method (for example, string generation of URLs or whatever) or a set of methods (for example, navigation methods in a browser, like "forward", "back", ...). I do this in the expectation that if my test assumptions fail, most likely a single test file will show the failure, and ideally it will already give an indication of what went wrong.

    Especially for data-driven tests, for example tests that compare known good results to the results of the current implementation, I like to keep these in separate programs because the data-driven tests are usually mostly data and not code themselves.
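
    For illustration, a minimal sketch of such a data-driven test; the module under test and the data table are both made up:

        use strict;
        use warnings;
        use Test::More;
        use My::URL;    # hypothetical module under test

        # Known good results: description => [ input, expected output ]
        my %cases = (
            'bare host' => [ 'example.com',     'http://example.com/'    ],
            'with path' => [ 'example.com/foo', 'http://example.com/foo' ],
        );

        for my $name (sort keys %cases) {
            my ($input, $want) = @{ $cases{$name} };
            is( My::URL::canonicalize($input), $want, $name );
        }
        done_testing;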

    I run my tests in one shell session and keep my editor in a separate window, so I cannot say how to integrate running the tests into the editor.

Re: How do you structure and run module test code?
by toolic (Bishop) on Feb 23, 2017 at 14:15 UTC
    I use Devel::Cover to make sure I have tests for all my features. This usually means adding checks to existing test files, as opposed to creating new test files.
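
    For reference, the usual command-line workflow with Devel::Cover looks like this (the cover script ships with the module):

        cover -delete
        HARNESS_PERL_SWITCHES=-MDevel::Cover prove -lr t
        cover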

    Regarding the naming of test files, just use names that are meaningful. I always like to have an extremely simple test which does only one thing related to my code (as opposed to the generic pod/load/critic tests). I usually name it basic.t. This makes things easier to debug when things go horribly wrong.
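
    Such a basic.t can be as small as this (the module name is made up):

        use strict;
        use warnings;
        use Test::More tests => 2;

        use_ok('My::Module');                               # does it even compile?
        ok( My::Module->can('new'), 'constructor exists' ); # one trivial check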

    After release, if someone submits a bug report, I normally create a dedicated test file that sensitizes that bug.

      That's an excellent point about the bug reports. (++)

      I try to follow this procedure when a bug report is raised:

      1. Write a new test file to reproduce the bug (i.e. the tests fail on the unpatched code). The test file is named after the report number (a minimal sketch follows this list).
      2. Iteratively: attempt a fix and run the test file until all tests pass.
      3. Release the new version.
      4. Keep the test file in the suite to prevent regression because there is nothing more annoying than having to patch the same bug twice.
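
      As a minimal sketch, such a regression test file might look like the following; the ticket number, module, and bug are all invented for illustration:

          # t/rt-12345.t - regression test for (hypothetical) ticket 12345:
          # parse() used to die on empty input instead of returning undef
          use strict;
          use warnings;
          use Test::More tests => 2;
          use My::Module;    # hypothetical

          my $result = eval { My::Module::parse('') };
          is( $@, '', 'parse() survives empty input' );
          is( $result, undef, 'parse() returns undef for empty input' );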

      Where possible I try to provide such a test file when reporting bugs in other people's code as it makes less work for them and (hopefully) illustrates precisely what the bug is.

Re: How do you structure and run module test code?
by haukex (Archbishop) on Feb 23, 2017 at 14:53 UTC

    Here is the t/ directory for one of my modules. Not everything in there is perfect (e.g. I should probably split 10_basic.t into multiple files someday), but it works. A few things to note:

    • The tests are numbered so they'll always be run in the same order.
    • I didn't split the author tests into a separate directory, but I do like Corion's idea to do so.
    • I've tried to make the test directory fairly self-contained, for example Config_Perl_Testlib.pm and perlcriticrc are also located in the t/ directory. The tests are hardcoded to this structure for now ("use FindBin (); use lib $FindBin::Bin; use Config_Perl_Testlib;"). While not extremely elegant, again, it works, although it may not be advisable for large projects where the t/ directory might get too cluttered.
    • In 00_smoke.t, note how I use Test::More's BAIL_OUT function to abort the test suite if some basic things go wrong (a minimal sketch of the idea follows this list).
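
    The BAIL_OUT pattern, reduced to its core (the module name is made up):

        use strict;
        use warnings;
        use Test::More tests => 1;

        # If the module doesn't even load, every later test file would fail
        # with the same noise - so stop the whole suite right here.
        require_ok('My::Module')
            or BAIL_OUT('My::Module does not load, aborting test suite');
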
    How should I group my tests into the different .t files?

    You actually have a lot of flexibility. The very basics are that prove and similar tools run the t/*.t files and expect Test Anything Protocol (TAP) output; the only other thing to keep in mind is to set @INC appropriately, e.g. via prove -l. You can start with a single .t file and, as it grows, split it up into multiple files. How you split it is up to you, but consider that while working on a specific part of the module, you may want to run only the tests for that part. You can see from the various responses you've already got that this is a TIMTOWTDI issue :-)
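
    Concretely, a .t file is just a Perl program that prints TAP, usually via Test::More; a minimal skeleton, not tied to any particular module, looks like this:

        use strict;
        use warnings;
        use Test::More;    # emits TAP for us

        ok( 1 + 1 == 2, 'arithmetic still works' );
        done_testing;      # declares the plan after the fact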

    How are the pros running individual test files quickly and efficiently with vim?

    I run all my Perl from the command line, in the case of tests usually prove -l, as it gives me the closest environment to how my scripts will be executed later. Some IDEs do things with @INC, the working directory, or redirecting STDIN/OUT/ERR that I sometimes don't agree with, so I've found it easiest to simply have a terminal window open.

Re: How do you structure and run module test code?
by choroba (Cardinal) on Feb 23, 2017 at 13:58 UTC
    I usually proceed similarly to Corion. When working on a small project, I often run the tests automatically whenever any file in the structure changes, i.e. after each save. I don't need to leave the editor, but I can check whether I fixed a problem or not.

    At work, though, where the number of .pm files is large, we organize the t/ directory as a direct mirror of the lib/ directory, so for each lib/A/B/C.pm, you can easily find its corresponding t/A/B/C.t test file that tests all its public functions or methods, possibly mocking all dependencies. We have special directories for integration tests and larger tests that might need some parts of the whole software running (e.g. the http server to test the RESTful API).
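
    A hedged sketch of what one such mirrored test might look like, here using Test::MockModule for the mocking; every module and method name below is invented:

        # t/A/B/C.t - tests lib/A/B/C.pm, mocking its dependency
        use strict;
        use warnings;
        use Test::More;
        use Test::MockModule;
        use A::B::C;

        # Replace the real backend with a canned response for the duration
        my $mock = Test::MockModule->new('A::B::Backend');
        $mock->mock( fetch => sub { return 'canned value' } );

        is( A::B::C->new->lookup('key'), 'canned value',
            'lookup() delegates to the backend' );

        done_testing;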

    ($q=q:Sq=~/;[c](.)(.)/;chr(-||-|5+lengthSq)`"S|oS2"`map{chr |+ord }map{substrSq`S_+|`|}3E|-|`7**2-3:)=~y+S|`+$1,++print+eval$q,q,a,
Re: How do you structure and run module test code?
by stevieb (Canon) on Feb 23, 2017 at 15:09 UTC

    Great question, and great responses so far. I don't have much to add, but I'll throw out some of my preferences.

    Depending on the complexity of the modules, sometimes I structure my tests like this:

    t/
        dirA/
            test1
            test2
        dirB/
            test1
            test2

    This allows me to run an individual directory of tests if I'm just testing a specific area of my code:

    prove t/dirA/*.t

    ...and in the Makefile.PL within the WriteMakefile() function:

    test => {TESTS => 't/*.t t/*/*.t'},
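
    In context, that line sits inside the WriteMakefile() call; a stripped-down Makefile.PL (the distribution name is made up) would be:

        use strict;
        use warnings;
        use ExtUtils::MakeMaker;

        WriteMakefile(
            NAME    => 'My::Module',    # hypothetical
            VERSION => '0.01',
            # Pick up tests one directory deep as well as the top level:
            test    => { TESTS => 't/*.t t/*/*.t' },
        );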

    In other projects, I just have consecutively numbered test files, where each test file typically focuses only on a specific subroutine or issue.

    But mostly, I write tests as I write new subs, and number my test files with gaps in the numbers, e.g. 05-load.t, 10-subA.t, 15-subB.t, etc. This way I have room in the event I need to add a new test file at a specific location (between subA.t and subB.t, for instance).

    My preferred way changes depending on the project, but mostly I opt for sequential numbers with gaps in between, as in my last example above.

    To boot, because pretty much all of my code is open source and on GitHub, I use Travis CI for continuous testing on every push, tied in with Coveralls.io, which provides a basic overview of the test coverage on each commit. I have an overly verbose .travis.yml configuration file that automates Travis/Coveralls for me.

    Then, once I get near or to 100% coverage with Coveralls, I use Devel::Cover as most others do to get fine-grained test coverage results, adding tests and/or test files as necessary.

Re: How do you structure and run module test code?
by 1nickt (Canon) on Feb 23, 2017 at 14:16 UTC

    I suggest finding a (simple) CPAN module you like and examining the tests in its distribution.

    The way to figure out what tests to write:

    1. Decide what you want a function to do.
    2. Write the documentation that says "foo() does this when given that".
    3. Write tests in a test file (I would name it descriptively, like t/functions/001-foo.t) that prove that foo() does what the doc says; they will fail at first (see the sketch after this list).
    4. Write the code that makes the tests pass.
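
    Following those steps, the first version of such a test file might look like this before foo() even exists (the module name and behaviour are invented for illustration):

        # t/functions/001-foo.t - written before the code, so it fails first
        use strict;
        use warnings;
        use Test::More;
        use My::Module qw(foo);    # hypothetical module exporting foo()

        # The documentation promises: foo() returns "this" when given "that"
        is( foo('that'), 'this', 'foo() does what the POD says' );

        done_testing;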

    Running tests inside an editor session seems silly to me (but a lot of what TheDamian does and advocates is both way too dogmatic and way too clever for a simple monk like me). I run my tests in a separate shell session, and while they are running I am doing something else useful with the editor or in another shell.

    One thing I strongly suggest is to make your test files not only thematic, but small and simple. Some test files will consist almost completely of test statements (ok(), is_deeply(), etc.), in which case it's fine to have lots of tests in them; you might also create tests in a loop. But once you need even a few lines of code to provide data for a test or to make it run, I think it's much better to have one test per file, since you will probably turn on verbosity as soon as you are working on that test, and the output of all the others would be too noisy.

    Hope this helps!


    The way forward always starts with a minimal test.
Re: How do you structure and run module test code?
by Anonymous Monk on Feb 23, 2017 at 18:55 UTC
    I don't often test modules, but when I do I test the Interface, not the module. O_0

    From vim I often do stuff like running the whole test suite:

    !prove -lr
    or just running the test I am working on:
    !prove -lv %
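
    In a :! command, vim expands % to the name of the file in the current buffer. If you run the current test file a lot, a mapping keeps it to a couple of keystrokes; a minimal sketch for a vimrc (the <leader>t choice is arbitrary, and prove is assumed to be on your PATH):

        " run the test file in the current buffer, verbosely, with lib/ on @INC
        nnoremap <leader>t :!prove -lv %<CR>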
