PerlMonks  

RFC: Basic Testing Tutorial

by hippo (Bishop)
on Jul 05, 2019 at 08:24 UTC ( [id://11102443] )

Fellow monks, I humbly submit this basic testing tutorial for review. All comments are welcomed - feel free to send a personal message instead of posting a reply if you so wish.

Rationale: There are only a couple of tutorials about testing here in the Monastery and both of them concern mocking. What is missing IMHO is a nice, gentle introduction to testing with some example code to help take out that first massive leap on the learning curve. I was late to the testing game in Perl and it's a regret that I did not get into it sooner. Hopefully this will be useful to those in a similar situation.

Updates: Add examples of is, like and skip as suggested by haukex and stevieb. Add mention of the t/ directory with prove as suggested by haukex. Add mention of done_testing() as suggested by many respondents.

Now published in the Tutorials section as Basic Testing Tutorial.


Basic Testing Tutorial

One of the widely-acknowledged strengths of Perl is its use of testing as a cornerstone. Almost every module on CPAN comes with its own test suite and there are many modules available to help with testing and to make it as painless as possible. For those looking to add testing to an existing code base or looking to get started with TDD here is a guide to the basics.

Test scripts and TAP

Tests in Perl are, at their heart, simple scripts which evaluate a series of boolean expressions and output the results. No doubt you could write such a script in a few minutes and the output would tell you how your test criteria have fared. However, there is a standard form for the output of test scripts which is best adhered to. This is the Test Anything Protocol, or TAP for short. By having your script output its results in this format, they can be analysed by a wealth of other programs called TAP harnesses to provide summary data, highlight failures, etc.

TAP is an open protocol, though it was originally written for Perl. It is very simple at its heart. The first line of output is the plan: the range of test numbers, starting at 1 (so a five-test script outputs 1..5). Each subsequent line consists of three fields: a pass/fail flag which is either "ok" or "not ok", the number of the current test, and an optional description of that test. Thus a script containing a single, passing test might output this when run:

1..1
ok 1 The expected file "foo" is present
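To see the three fields together, here is a purely illustrative transcript from a hypothetical three-test script in which the second test fails:

```
1..3
ok 1 The config file parses
not ok 2 The config contains a hostname
ok 3 The log directory is writable
```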

A first test script

So, let's write a simple test script which will output TAP. Suppose we want to check that nobody is running our code far in the past. Here would be a trivial test.

use strict;
use warnings;
use Time::Piece;

print "1..1\n";
print localtime(time)->year > 1999 ? 'ok' : 'not ok';
print " 1 Not in a previous century\n";

When we run this without time-travelling we see this output:

1..1
ok 1 Not in a previous century

and we see that our test has passed.

Test modules

Of course there are modules on CPAN to help with testing. The simplest of these is, appropriately enough, Test::Simple which is a core module. It is little more than a thin wrapper exporting a single handy function, ok, to ensure that your TAP output is in the correct format. We can rewrite our simple century test with this module:

use strict;
use warnings;
use Time::Piece;
use Test::Simple tests => 1;

ok localtime(time)->year > 1999, 'Not in a previous century';

Now there are no print statements because the module takes care of all the output. The tests => 1 on line 4 sets the number of tests we expect to run, so we no longer need to print "1..1\n" ourselves. Similarly, the ok function evaluates its first argument as a boolean expression and outputs the correct TAP line as a result. The second argument is the optional description.

Technically it is optional but I would encourage you very strongly to include a description for any test. If you have a script with say 50 tests in it and test 37 fails but has no description, how will you know what is wrong? Make life easy for yourself (and your collaborators and even the users) by describing each test in the TAP output.

Other testing functions

While the ok function is useful, the output is a simple pass/fail - it doesn't say how it failed. If our century test fails we don't know what year it thinks it is. For that we would need to write more code or use code someone else has written. Fortunately there is a plethora of other testing modules to choose from, the most common of which is Test::More (also in core). This gives us a heap of other functions so that we can easily perform different types of evaluations and receive better feedback when they fail.

Let's use Test::More and its handy cmp_ok function in our script.

use strict;
use warnings;
use Time::Piece;
use Test::More tests => 1;

cmp_ok localtime(time)->_year, '>', 1999, 'Not in a previous century';

Note that I've introduced a bug here (using _year instead of year) so that the test will likely fail. Now our test output looks like this:

1..1
not ok 1 - Not in a previous century
#   Failed test 'Not in a previous century'
#   at /tmp/bar.t line 6.
#     '119'
#         >
#     '1999'
# Looks like you failed 1 test of 1.

We can see at a glance what is being tested and that the year we actually have (119) is clearly wrong so we need to fix the bug. All lines in TAP which start with a hash (#) are comments for the reader: Test::More and friends use this to give us verbose reports about how things have gone wrong.

There are a number of other useful comparator functions in Test::More such as is for simple equality, like for regex and so on. These are fully explained in the Test::More documentation, but their usage is quite straightforward. Let's add a couple of other tests to see how they are used.

use strict;
use warnings;
use Time::Piece;
use Test::More tests => 3;

my $now = localtime (time);

cmp_ok $now->_year, '>', 1999, 'Not in a previous century';
is $now->time, $now->hms, 'The time() and hms() methods give the same result';
like $now->fullday, qr/day$/, 'The dayname ends in "day"';

There are also control flow structures such as skip to avoid running tests in certain circumstances such as an invalid underlying O/S or absence of a particular module. We could use this here to skip the test of the dayname if a non-English locale applies.

use strict;
use warnings;
use Time::Piece 1.31_02;
use Test::More tests => 3;

Time::Piece->use_locale;
my $now = localtime (time);
cmp_ok $now->_year, '>', 1999, 'Not in a previous century';
is $now->time, $now->hms, 'The time() and hms() methods give the same result';

SKIP: {
    skip 'Non-English locale', 1 unless substr ($ENV{LANG} // 'en', 0, 2) eq 'en';
    like $now->fullday, qr/day$/, 'The dayname ends in "day"';
}

Further still there are other modules in the Test::* namespace to help with all manner of scenarios.

Working to a plan

It may be the case that the precise number of tests in the script is not known or may change frequently. In those situations, specifying the number of tests like use Test::More tests => 3; can become unwieldy or problematic. Instead we can just use Test::More; and then specify the plan later.

One method of doing this is to call plan () as a stand-alone statement. If the number of tests is dependent on an array which is only computed at run time we could write

plan tests => scalar @array;

once the array has been populated.
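Putting that together, here is a minimal sketch of such a script. The contents of @array are invented for illustration; imagine they are only known at run time.

```perl
use strict;
use warnings;
use Test::More;    # no plan yet

# Imagine this list is computed at run time, e.g. read from disk.
my @array = ('alpha', 'beta', 'gamma');

# Now that we know how many tests will run, declare the plan.
plan tests => scalar @array;

like $_, qr/^[a-z]+$/, "Entry '$_' is lower-case" for @array;
```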

Another approach is to call done_testing (scalar @array); but, as its name suggests, this must only be called after the final test has run. The number of tests can even be omitted entirely here, but that of course removes the check that all the expected tests have indeed run.

done_testing ();
exit;
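As a complete sketch (with invented sample data), a script written with done_testing instead of an up-front plan might look like this:

```perl
use strict;
use warnings;
use Test::More;    # note: no plan declared up front

my @words = qw(foo bar baz);    # hypothetical run-time data

is length $_, 3, "'$_' is three characters long" for @words;

# Declare the plan last. The count is optional, but supplying it
# restores the check that the expected number of tests actually ran.
done_testing (scalar @words);
```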

Using a harness

If you have installed a module from CPAN you will probably have noticed the test phase running. You can use the same harness on your own test scripts by running the prove command. By default this condenses the results of tests and at the end provides a summary of which tests in which files have failed, how long the run took, etc. For example:

$ prove /tmp/bar.t
/tmp/bar.t .. 1/3
#   Failed test 'Not in a previous century'
#   at /tmp/bar.t line 8.
#     '119'
#         >
#     '1999'
# Looks like you failed 1 test of 3.
/tmp/bar.t .. Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/3 subtests
        (less 1 skipped subtest: 1 okay)

Test Summary Report
-------------------
/tmp/bar.t (Wstat: 256 Tests: 3 Failed: 1)
  Failed test:  1
  Non-zero exit status: 1
Files=1, Tests=3,  0 wallclock secs ( 0.03 usr  0.00 sys +  0.06 cusr  0.01 csys =  0.10 CPU)
Result: FAIL

This is particularly useful for larger projects with many scripts/modules each of which has many tests. If prove is run with no arguments it will look for files matching t/*.t and run all of those in sequence.

Test Driven Development

Now that you can test your code you can consider TDD as a methodology. By writing the tests before the code you are setting out what you expect the code to do - it's a formal representation of the specification. Doing so is a skill in itself and many people make a career out of being a tester.
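As a tiny illustration of the workflow (the sum_positives function and its specification are invented for this example), the tests are written first and the code then follows to satisfy them:

```perl
use strict;
use warnings;
use Test::More tests => 3;

# The tests come first: they are the executable specification.
is sum_positives (),          0, 'Empty list sums to zero';
is sum_positives (1, 2, 3),   6, 'Positive values are summed';
is sum_positives (-5, 2, -1), 2, 'Negative values are ignored';

# The implementation is then written (and rewritten) until it passes.
sub sum_positives {
    my $total = 0;
    $total += $_ for grep { $_ > 0 } @_;
    return $total;
}
```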

See Also

Replies are listed 'Best First'.
Re: RFC: Basic Testing Tutorial
by haukex (Archbishop) on Jul 07, 2019 at 10:02 UTC

    Very nice!!

    I just have a couple of very minor ideas.

    • Maybe mention done_testing as an option? I always start with use Test::More; ...; done_testing;, and then when I'm done with the first pass of writing tests, I switch over to use Test::More tests => N;.
    • Maybe give example of is, since it's a pretty common one?
    • You could perhaps also mention the t directory: it's fairly easy to create a t directory beneath the location of the script being tested, and then just run prove.

      Thanks for taking the time to read the tutorial and for your suggestions. I've added an example of is now as that is pretty fundamental.

      While I do take your point about done_testing it would be very easy for a beginner to miss the last step of reverting to a specific plan at the end and therefore maybe miss that the number of tests is not as expected. As it is documented in full in Test::More anyway I'm not entirely convinced of the benefit of repeating that here.

      I'm slightly more minded to mention t as a directory for tests but again if the user is at that level, they're probably looking more at module-writing tutorials where that's covered nicely - and I've already linked to Discipulus's great post on that at the end. Does it need mentioning here still do you think?

        I don' t know where it falls in skill or teaching target but I strongly prefer the done_testing($n) idiom. The main reason is it lends itself to flexibility and a bit of intention documentation in growing/maintaining tests, for example cases like this–

        my %tests = (
            case1 => { input => …, output => … },
            case2 => { … }
        );

        subtest "Something" => sub {
            plan skip_all => "Moon is not full" unless …;
            ok 1;
            ok 2;
            done_testing(2);
        };

        for my $test ( values %tests ) {
            ok $test->{input}, "Have input…";
            is process($test->{input}), $test->{output}, "Process works";
        }

        done_testing( 1 + 2 * keys %tests );
        it would be very easy for a beginner to miss the last step of reverting to a specific plan at the end

        Yes, that's true - it's fine to leave it out, of course.

        mention t as a directory for tests

        An alternative to explaining how to set up a t directory might be to just mention prove's default behavior when not given a filename, something along the lines of "You can give prove a list of test files to run, or if you don't give it any filenames, it'll look for files matching t/*.t and run those." - that might give enough of a hint.

Re: RFC: Basic Testing Tutorial
by stevieb (Canon) on Jul 07, 2019 at 16:11 UTC

    Great post, hippo!

    Like haukex, I've just got a couple of suggestions. I'll stay away from more advanced testing functionality as this is a "Basic" tutorial.

    Ensuring things die() when you expect them to, in combination with the like() and is() functions. If the call within the eval() succeeds, the 1 will be returned. If not, it dies, and the eval() returns undef:

    for (-1, 4096){
        is eval { $e->read($_); 1; }, undef, "read() with $_ as addr param fails";
        like $@, qr/address parameter out of range/, "...and error is sane";
    }

    Always use sane test messages, so that it's trivially easy to see the output and quickly identify in your test script where the test actually is. In the above, it specifies exactly what I'm testing (read()), it states that I'm specifically testing the addr param, and even signifies which iteration of the loop did/didn't break ($_).

    How to use skip. This is a basic piece of functionality that a blooming unit test writer needs to know. There are several ways and reasons to use this, but I'll stick to the most basic premises:

    Skip all tests in a file for any reason:

    plan skip_all => "i2c bus currently disabled for other purposes";

    Skip all tests in a file based on external flag:

    if (! $ENV{SERIAL_ATTACHED}){
        plan skip_all => "The damned serial port shorted out!";
    }

    Skip all tests in a file if certain software isn't installed (stolen from a POD test):

    my $min_tpc = 1.08;
    eval "use Test::Pod::Coverage $min_tpc";
    plan skip_all => "Test::Pod::Coverage $min_tpc required for testing POD coverage" if $@;

    Speaking of testing, yesterday I reached the 10,000 test mark on one of my larger, more elaborate projects :)

    Files=61, Tests=10032, 586 wallclock secs ( 8.68 usr  0.50 sys + 149.28 cusr  8.14 csys = 166.60 CPU)

      Thanks, stevieb. I've added in an example of skip now as that is definitely worth bringing to the beginner's attention.

      Regarding the error trapping, if anything I'd be more inclined to steer the new tester towards Test::Exception or Test::Fatal rather than rolling their own. What's your reasoning for using the bare eval instead?

        "What's your reasoning for using the bare eval instead?"

        Habit, fewer prerequisites in my distributions, and less abstraction. I'm not saying that a distribution isn't a good idea, I just avoid going that route where I can.

Re: RFC: Basic Testing Tutorial
by eyepopslikeamosquito (Archbishop) on Jul 11, 2019 at 22:07 UTC

      Thanks for these links. I've added the former as it is succinct and itself links to the latter.
