
Author tests or standard tests?

by nysus (Parson)
on Nov 06, 2018 at 17:00 UTC ( #1225311=perlquestion )

nysus has asked for the wisdom of the Perl Monks concerning the following question:

Is there some common wisdom out there on what tests should go in the extra tests directory (/xt) and which tests should go in the regular test directory (/t)? I know the xt dir is generally used for "author" tests. My thought is only tests that can only possibly be run by author using some advanced set up/configuration should go in extra tests (maybe using some OAuth key an end user wouldn't have, for example). I searched the "Perl Testing: A Developer's Notebook" book on Safari and it doesn't even mention the phrase "author tests" or "xt" or "extra tests" which has me wondering about the importance of the distinction. Thanks!
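    For context, the conventional split looks roughly like this (a sketch; the subdirectory names under xt/ vary by toolchain and are illustrative only):

```
My-Dist/
    lib/My/Dist.pm
    t/              # run at install time by "make test", cpanm, etc.
    xt/             # skipped by installers; run manually or at release time
        author/     # pod, manifest, perlcritic-style checks
        release/    # checks run before uploading to CPAN
```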

$PM = "Perl Monk's";
$MCF = "Most Clueless Friar Abbot Bishop Pontiff Deacon Curate Priest Vicar";
$nysus = $PM . ' ' . $MCF;

Re: Author tests or standard tests?
by eyepopslikeamosquito (Bishop) on Nov 06, 2018 at 19:37 UTC

    Some historical background, in case it is of some use.

    The Oslo Consensus (May 2008)

    • xt/ directory for release and other non-install-time tests (subdirectories optional)
    • Support 'requires => { perl => }' and extend to all 'requires' types
    • *.PL should generate META_LOCAL.yml with requirements after dynamic configuration

    The Lancaster Consensus (April 2013)

    See The Lancaster Consensus and The Annotated Lancaster Consensus for full details.

    Historically, AUTOMATED_TESTING has been confusing, used for a number of different purposes:

    1. I don't want the user to interact with this test.
    2. This is a long-running test.
    3. This test depends on an external website (say) and I don't want to stop the user installing if it fails, but I want to see what CPAN smokers experienced.

    The Lancaster Consensus clarifies the semantics of AUTOMATED_TESTING and RELEASE_TESTING and adds three new environment variables, making a total of five:
    • AUTOMATED_TESTING — tests are being run by an automated smoke tester, with no human watching
    • NONINTERACTIVE_TESTING — tests must not prompt for user input
    • EXTENDED_TESTING — the user is willing to run optional, long-running tests
    • RELEASE_TESTING — tests are being run as part of a release process
    • AUTHOR_TESTING — tests are being run as part of the author's own development


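    As an illustrative sketch (the sub name and tier labels here are invented, not part of any consensus document), a distribution's test policy under these variables might be expressed as:

```perl
use strict;
use warnings;

# Sketch: map Lancaster Consensus environment variables to the test
# tiers a run should include. Tier names are illustrative only.
sub active_tiers {
    my %env = @_;
    my @tiers = ('install');                             # always run t/ basics
    push @tiers, 'extended' if $env{EXTENDED_TESTING};   # slow/optional tests
    push @tiers, 'release'  if $env{RELEASE_TESTING};    # pre-upload checks
    push @tiers, 'author'   if $env{AUTHOR_TESTING};     # style/pod/critic checks
    return @tiers;
}

print join(',', active_tiers()), "\n";                      # install
print join(',', active_tiers(RELEASE_TESTING => 1)), "\n";  # install,release
```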
    To run module tests after installation, use the new target "make test-installed", equivalent to "make test" but without adding blib to @INC.
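    A minimal sketch of the @INC difference involved: "make test" effectively runs tests with the blib/ build directories prepended to @INC, so tests load the freshly built copy; a test-installed run would omit them, letting require() find the installed copy instead.

```perl
use strict;
use warnings;

# "make test" effectively prepends the build directories to @INC,
# so tests exercise the just-built code rather than an installed copy.
use lib 'blib/lib', 'blib/arch';

print join("\n", grep { /blib/ } @INC), "\n";
```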

    Some Related CPAN Modules

    See also: Perl CPAN test metadata, which, in addition to The Oslo Consensus and The Lancaster Consensus, covers The Berlin Consensus (2015) and PTS Oslo (2018).

      I'm glad this thread got posted. I'd never heard of this "Lancaster Consensus" before. Took a brief glance before going into a meeting here, but it definitely looks interesting enough to read later today.

Re: Author tests or standard tests?
by stevieb (Canon) on Nov 06, 2018 at 17:31 UTC

    Personally, I don't use the xt directory. All of my tests, regardless of the context(s) they run in, are in the t/ directory (in some cases I have multiple levels of directories under t/).

    Typically, POD checking tests, MANIFEST tests, critic tests and pretty much any test that checks something that doesn't have any impact on the compilation, installation or operation of your distribution could be classified as an author test.

    I use the environment variable RELEASE_TESTING to enable/disable my author tests. I typically only run them when I'm about to tag/upload to CPAN a new release. I usually skip author tests like this:

    use strict;
    use warnings;
    use Test::More;

    unless ( $ENV{RELEASE_TESTING} ) {
        plan( skip_all => "Author tests not required for installation" );
    }

    my $min_tcm = 0.9;
    eval "use Test::CheckManifest $min_tcm";
    plan skip_all => "Test::CheckManifest $min_tcm required" if $@;

    ok_manifest();

    Further, several of my distributions have numerous other tests that skip under certain scenarios during install, as the end-user may not have all of the prerequisites configured or installed. For example, I have a dedicated hardware setup for continuous integration/testing for my RPi::WiringPi suite with all sensors and ICs attached. Most won't have this on install, so in my documentation for said distribution, I have:

    Testing Environment Variable List

    Here's the contents of my /etc/environment file, setting the various testing environment variables for the full test platform. For LCD, the last two digits (4, 20) are for four row, 20 column units. If you only have a two row by 16 column unit, leave those last two digits off.

    PI_BOARD=1
    RPI_ARDUINO=1
    RPI_ADC=1
    RPI_MCP3008=1
    RPI_MCP4922=1
    RPI_SHIFTREG=1
    RPI_LCD=1
    RPI_SERIAL=1
    RPI_HCSR04=1
    BB_RPI_LCD=5,6,4,17,27,22,4,20
    RPI_RTC=1
    RPI_MCP23017=1
    RELEASE_TESTING=1

    That tests the entire platform and all connected devices, ICs, sensors etc, including my author tests. None of these are set in a user's environment by default, so the most basic of tests will pass on any platform, skipping the ones that can't be tested.

    To skip them, it's very similar to the author tests:

    BEGIN {
        if (! $ENV{RPI_ARDUINO}){
            plan skip_all => "RPI_ARDUINO environment variable not set\n";
        }
        if (! $ENV{PI_BOARD}){
            $ENV{NO_BOARD} = 1;
            plan skip_all => "Not on a Pi board\n";
        }
    }

    The test output is displayed like the following (some cleanup for brevity's sake) when not running on a Raspberry Pi:

    PERL_DL_NONLAZY=1 "/home/pi/perl5/perlbrew/perls/perl-5.28.0/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
    t/00-load.t ....................... ok
    t/01-identification_and_label.t ... skipped: Not on a Pi board
    t/05-pin.t ........................ skipped: Not on a Pi board
    t/10-register.t ................... skipped: Not on a Pi board
    t/15-pwm_spi_adc.t ................ skipped: RPI_ADC environment variable not set
    t/20-cleanup.t .................... skipped: Not on a Pi board
    t/25-sig_die.t .................... skipped: Not on a Pi board
    t/35-pin_map.t .................... skipped: Not on a Pi board
    t/40-interrupt_rising_and_pud.t ... skipped: Not on a Pi board
    t/41-interrupt_falling_and_pud.t .. skipped: Not on a Pi board
    t/42-interrupt_both_and_pud.t ..... skipped: Not on a Pi board
    t/45-shift_reg_adc.t .............. skipped: RPI_SHIFTREG environment variable not set
    t/55-dac.t ........................ skipped: RPI_MCP4922 environment variable not set
    t/60-lcd.t ........................ skipped: RPI_LCD environment variable not set
    t/64-i2c_exceptions.t ............. skipped: RPI_ARUDINO environment variable not set
    t/65-i2c.t ........................ skipped: RPI_ARDUINO environment variable not set
    t/67-rtc.t ........................ skipped: RPI_RTC environment variable not set
    t/70-alt_modes.t .................. skipped: Not on a Pi board
    t/75-serial.t ..................... skipped: RPI_SERIAL environment variable not set
    t/80-mode_state_all_pins.t ........ skipped: Not on a Pi board
    t/85-pwm_hw_mods.t ................ skipped: Not on a Pi board
    t/90-servo.t ...................... skipped: Not on a Pi board
    t/92-mcp23017.t ................... skipped: RPI_MCP23017 environment variable not set
    t/95-pod_linkcheck.t .............. skipped: Test::Pod::LinkCheck required for testing POD links
    t/manifest.t ...................... skipped: Author tests not required for installation
    t/pod-coverage.t .................. skipped: Author tests not required for installation

    The third one from the bottom (pod_linkcheck.t) is both an author test file and one with a prerequisite. I don't want to include unnecessary distributions if I don't have to; the prerequisite is installed on my testing platforms, so if RELEASE_TESTING=1 is set, that test file will run. Here's how I check for the distribution's availability:

    use strict;
    use warnings;
    use Test::More;

    eval "use Test::Pod::LinkCheck";
    if ($@) {
        plan skip_all => 'Test::Pod::LinkCheck required for testing POD';
    }

    if (! $ENV{RELEASE_TESTING}){
        plan skip_all => 'Test::POD::LinkCheck tests not required for install.';
    }

    Test::Pod::LinkCheck->new->all_pod_ok;

    Not the best way to use eval(), but it doesn't matter here: any eval error suffices to skip the tests.
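    For what it's worth, a common tighter variant is a block eval around require, which avoids string eval and traps only the module-load failure (a sketch; the module name is just the one from the example above):

```perl
use strict;
use warnings;

# Block eval + require: no string eval, traps only the load failure.
# The "1" makes a successful load return true.
my $ok = eval { require Test::Pod::LinkCheck; 1 };

print $ok ? "prereq available\n" : "prereq missing: would skip\n";
```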

      Very helpful. And I totally forgot about tests like perlcritic and what not as I haven't explored those modules yet. This clarifies things nicely for me. Thanks.


Re: Author tests or standard tests?
by 1nickt (Abbot) on Nov 06, 2018 at 18:11 UTC

    Hi, I think you are on the right track. It's not that xt == author. The main thing is that Perl module installers like cpanm will not run tests in /xt. Put tests in /xt that should not be run on installation of the module. I've always used them to test post-installation functionality. So an end user might still run them, but to test that the just-installed client can connect to the server, or to run a test that depends on a given DB being available, etc.

    Hope this helps!

    The way forward always starts with a minimal test.
Re: Author tests or standard tests?
by bliako (Monsignor) on Nov 07, 2018 at 11:43 UTC

Node Type: perlquestion [id://1225311]
Front-paged by haukex