Your Mother has asked for the wisdom of the Perl Monks concerning the following question:

(Update: Solved to my satisfaction for this particular problem. New runner in comment below.)

I have a big test with quite a few configuration options and reasons to run subtests or not depending on the environment. It works great on its own, but it doesn't play well with the master runner because there are a couple hundred other tests, most of which have none of the general environment concerns and none of which have the same granularity of configuration.

I want to run the test under three (or more) configurations/environments. I could certainly just do that inside the test; it's trivial. However! It's already a huge test, I don't know that my current permutations are final, it's confusing enough already, and another layer or two of indentation plus wrapping all of the subtests… it might be trivial to write, but it won't be fun to read or edit, and it won't be as clean to exit, skip, rerun, or run alone/once with a single set of the permutations as it is now…

So… here is a summary of my first DWIM idea, though not something that actually works –

run.t.raw, the original test

#!perl
use strictures;
use Test::More;

subtest "Prod" => sub {
    plan skip_all => "Set PROD_TEST to run" unless $ENV{PROD_TEST};
    ok 1, "OHAI, PROD!";
    done_testing(1);
};

subtest "Dev" => sub {
    plan skip_all => "Set DEV_TEST to run" unless $ENV{DEV_TEST};
    ok 1, "OHAI, DEV!";
    done_testing(1);
};

done_testing(2);

runner.t

#!perl
# File: runner.t
use strictures;
use App::Prove;

for my $env (qw/ PROD DEV /) {
    my $test = join "_", $env, "TEST";
    local $ENV{$test} = 1;
    my $app = App::Prove->new;
    $app->process_args( -I => "./", -v => "run.t.raw" );
    eval { $app->run } or die "Failed... not proceeding: $@";
}

Without runner / normal prove

prove -v run.t.raw
run.t.raw ..
    # Subtest: Prod
    1..0 # SKIP Set PROD_TEST to run
ok 1 # skip Set PROD_TEST to run
    # Subtest: Dev
    1..0 # SKIP Set DEV_TEST to run
ok 2 # skip Set DEV_TEST to run
1..2
ok
All tests successful.
Files=1, Tests=2,  0 wallclock secs ( 0.02 usr  0.01 sys +  0.06 cusr +  0.01 csys =  0.10 CPU)
Result: PASS

How it falls down / fails to DWIM

prove runner.t
runner.t .. All 2 subtests passed
        (less 2 skipped subtests: 0 okay)

Test Summary Report
-------------------
runner.t (Wstat: 0 Tests: 6 Failed: 4)
  Failed tests:  1-3, 6
  Parse errors: Plan (1..2) must be at the beginning or end of the TAP output
                Tests out of sequence.  Found (1) but expected (4)
                Tests out of sequence.  Found (2) but expected (5)
                More than one plan found in TAP output
                Bad plan.  You planned 2 tests but ran 6.
Files=1, Tests=6,  0 wallclock secs ( 0.01 usr  0.01 sys +  0.18 cusr +  0.05 csys =  0.25 CPU)
Result: FAIL

Side note: this works perfectly, but it cannot be run with prove or in a normal harness –

perl runner.t
# …
All tests successful.

My question: how can I make the "runner" act like a single "master" test in a clean way? I may end up doing it in a messy way, but I feel certain I'm just missing some simple idea. I tried some do and straight eval approaches, but without getting rather complicated they won't do the right thing: the inner test issues too many plan/start/stop/count statements for the harness to accept the whole run as one "master" test.

Replies are listed 'Best First'.
Re: Test runner that acts like a test and can be run as one
by hippo (Archbishop) on Apr 02, 2020 at 08:17 UTC

    I'm not sure you can make runner.t work with prove without altering run.t.raw slightly. It will work if you change the raw file to make the last line conditional, in a way similar to this:

    #!perl
    use strictures;
    use Test::More;

    subtest "Prod" => sub {
        plan skip_all => "Set PROD_TEST to run" unless $ENV{PROD_TEST};
        ok 1, "OHAI, PROD!";
        done_testing(1);
    };

    subtest "Dev" => sub {
        plan skip_all => "Set DEV_TEST to run" unless $ENV{DEV_TEST};
        ok 1, "OHAI, DEV!";
        done_testing(1);
    };

    done_testing(2) if $0 =~ /run\.t\.raw/;

    Then the runner can just become

    #!perl
    # File: runner.t
    use strictures;
    use Test::More;

    for my $env (qw/ PROD DEV /) {
        my $test = join "_", $env, "TEST";
        local $ENV{$test} = 1;
        do 'run.t.raw';
    }
    done_testing(4);    # 2 environments x 2 subtests each

    I don't know if that meets your criteria. Here's what happens when I run these:

    $ perl run.t.raw
    # Subtest: Prod
        1..0 # SKIP Set PROD_TEST to run
    ok 1 # skip Set PROD_TEST to run
    # Subtest: Dev
        1..0 # SKIP Set DEV_TEST to run
    ok 2 # skip Set DEV_TEST to run
    1..2
    $ prove run.t.raw
    run.t.raw .. ok
    All tests successful.
    Files=1, Tests=2,  0 wallclock secs ( 0.03 usr  0.01 sys +  0.02 cusr +  0.00 csys =  0.06 CPU)
    Result: PASS
    $ perl runner.t
    # Subtest: Prod
    ok 1 - OHAI, PROD!
    1..1
    ok 1 - Prod
    # Subtest: Dev
        1..0 # SKIP Set DEV_TEST to run
    ok 2 # skip Set DEV_TEST to run
    # Subtest: Prod
        1..0 # SKIP Set PROD_TEST to run
    ok 3 # skip Set PROD_TEST to run
    # Subtest: Dev
    ok 1 - OHAI, DEV!
    1..1
    ok 4 - Dev
    1..4
    $ prove runner.t
    runner.t .. ok
    All tests successful.
    Files=1, Tests=4,  0 wallclock secs ( 0.03 usr  0.00 sys +  0.02 cusr +  0.00 csys =  0.05 CPU)
    Result: PASS
    $
Re: Test runner that acts like a test and can be run as one
by Your Mother (Archbishop) on Apr 02, 2020 at 18:43 UTC

    Thank you very much for the comments above. hippo's answer snapped me out of my fog, and I also did some reading on the various harness libs and such, which might come into play another day. For this problem, though, I think (something approximately like) this as a runner is all I need –

    use strictures;
    use Test::More;

    my @config = qw/ PROD DEV /;
    for my $env ( @config ) {
        my $test = join "_", $env, "TEST";
        local $ENV{$test} = 1;
        subtest "$env" => sub {
            do "run.t.raw";    # The raw test reports its plan.
        };
    }
    done_testing( scalar @config );
Re: Test runner that acts like a test and can be run as one
by Anonymous Monk on Apr 02, 2020 at 08:16 UTC
      Looks are deceiving; the same issue remains.
Re: Test runner that acts like a test and can be run as one
by cxw (Scribe) on Apr 02, 2020 at 18:31 UTC
    Not knowing the details of your tests, I'm not sure if this is directly applicable. However, I can tell you about something similar I've done. In my Class::Tiny::ConstrainedAccessor, I need to run the same tests against about five different upstream modules. I put the tests in a common module. I then have a separate *.t file for each upstream module (each test condition). Each *.t file loads the common module and the upstream module, then passes the upstream to the test runner in the common module (example). The common module does not have a test plan or a done_testing() call, so it fits seamlessly into the framework provided by the *.t file.