hesco has asked for the wisdom of the Perl Monks concerning the following question:

I'm working on a tool called Test::MonitorSites, which runs basic availability tests against a collection of websites defined in a configuration file, reporting test summaries by email and critical failures by SMS to administrators' cell phones.

Right now my test suite does the ET-phone-home thing because the t/testsuite_*.ini files are littered with my email address as a default.

I'm releasing version 0.07 tonight; check a CPAN mirror near you soon. I've got code commented out in my Makefile.PL script which prompts for an email address to use for the tests, but I don't know of an elegant way to propagate those user-defined settings into the .ini files which drive the tests. Any ideas? That's question one.
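For what it's worth, one common pattern is to ship the .ini files as templates and have Makefile.PL expand them with the prompted value. This is only a sketch under my own assumptions, not code from the distribution: the template filename and the __RESULTS_RECIPIENT__ placeholder are invented for illustration, and the hard-coded address stands in for what prompt() would return.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempdir);

# In Makefile.PL this would come from ExtUtils::MakeMaker's prompt():
my $email = 'monitor@example.com';

# A scratch directory stands in for the distribution's t/ directory.
my $dir      = tempdir( CLEANUP => 1 );
my $template = "$dir/testsuite_basic.ini.tmpl";   # hypothetical template
my $ini      = "$dir/testsuite_basic.ini";        # file the tests would read

# Write a tiny template; __RESULTS_RECIPIENT__ is an invented placeholder.
open my $tmpl, '>', $template or die "Cannot write $template: $!";
print {$tmpl} "[global]\nresults_recipients = __RESULTS_RECIPIENT__\n";
close $tmpl;

# Expand the template into the .ini the test suite actually reads.
open my $in,  '<', $template or die "Cannot read $template: $!";
open my $out, '>', $ini      or die "Cannot write $ini: $!";
while ( my $line = <$in> ) {
    $line =~ s/__RESULTS_RECIPIENT__/$email/g;
    print {$out} $line;
}
close $in;
close $out;
```

The template never needs to be edited in place, so a repeated `perl Makefile.PL` run simply regenerates the .ini, and the shipped defaults never contain a real address.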

Question two is related to a question I asked last month, Testing a Test:: module: How can I get my t/14_cover_conditions.t test going here?

When I run it with perl t/14_cover_conditions.t, I get two failures from 19 tests. When I run it as prove t/14_cover_conditions.t, I get five failures from 19 tests. I'm using Test::Builder::Tester, and getting especially confused with this process of testing code which is testing code.
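For readers unfamiliar with Test::Builder::Tester, the basic pattern (a minimal sketch, deliberately unrelated to Test::MonitorSites itself) is: declare the TAP lines you expect with test_out(), run the code that emits tests, then compare captured output against the declaration with test_test(). A mismatch between the lines declared and the lines the inner code actually emits is one easy way to get confusing failures.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::Builder::Tester tests => 1;
use Test::More;

# Declare the exact TAP line the code under test should emit . . .
test_out("ok 1 - inner test passed");

# . . . run the "code which is testing code" (numbering restarts at 1
# inside the capture) . . .
ok( 1, 'inner test passed' );

# . . . and compare what was captured against what was declared.
test_test("captured the inner test's output");
```

Only the outer test_test() result reaches the harness; the inner ok() is swallowed by the capture.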

The readmore tags hide the test code plus the prove and test output. Any ideas would be appreciated.

-- Hugh

UPDATE

To respond to the comments below: last month's post about issues with t/12_exercise_test_sites_method.t led me to Test::Builder::Tester. This question is more about how to use that module appropriately. The code in my Makefile.PL script only collects the user input, using the prompt() method; getting the data is not the issue. I have no code to show because I have no elegant way in mind for propagating that user input into the t/*.ini files which need it before the test suite can use them. My first question is: is there an elegant way to take user input and embed it into the existing files in a distribution?

As suggested by chromatic, I have appended the output from those additional tests. In fact, I have updated both the perl t/14_ and the prove t/14_ output to reflect the current state of development. Thanks for the feedback. Yes, those two failing tests are listed as TODO, but only so they need not prevent a successful automated install. In reality they still need to be debugged somehow; I believe they should now be passing and cannot for the life of me understand why they are not.
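On the TODO point: the harness reports a failing TODO test as a pass, while a passing TODO test is flagged as an unexpected success. A minimal illustration of that behavior, separate from the Test::MonitorSites code:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 2;

TODO: {
    local $TODO = "not implemented yet";

    # Reported as "not ok 1 ... # TODO"; the harness counts it as a pass.
    ok( 0, 'expected failure' );

    # Reported as "ok 2 ... # TODO"; the harness flags it as an
    # unexpected success (TODO PASSED).
    ok( 1, 'unexpected success' );
}
```

Neither case makes the suite exit non-zero, which is why TODO keeps an automated install from failing.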

Running the test with prove:

t/14_cover_conditions....ok 1/19
#   Failed (TODO) test 'Test suite run without results_recipient defined.'
#   at t/14_cover_conditions.t line 99.
# STDOUT is:
#
# not:
#
#
# as expected
t/14_cover_conditions....ok 13/19
#   Failed (TODO) test 'Test suite ran without any critical errors'
#   at t/14_cover_conditions.t line 123.
# STDOUT is:
#
# not:
# ok 1 - Successfully linked to http://www.perlmonks.com.
#
# ok 2 - . . . and found expected content at http://www.perlmonks.com
#
# ok 3 - Successfully linked to http://validator.w3.org/.
#
# ok 4 - . . . and found expected content at http://validator.w3.org/
#
# ok 5 - Successfully linked to http://www.cpan.org.
#
# ok 6 - . . . and found expected content at http://www.cpan.org
#
# ok 7 - Successfully linked to http://www.campaignfoundations.com.
#
# ok 8 - . . . and found expected content at http://www.campaignfoundations.com
#
# as expected
t/14_cover_conditions....ok 1/19 unexpectedly succeeded
        TODO PASSED test 12
All tests successful (1 subtest UNEXPECTEDLY SUCCEEDED).
Passed TODO             Stat Wstat TODOs Pass  List of Passed
-------------------------------------------------------------------------------
t/14_cover_conditions.t            3     1   12
Files=1, Tests=19, 11 wallclock secs ( 0.39 cusr +  0.03 csys =  0.42 CPU)
Running tests with perl:
1..19
ok 1 - Test suite produced the expected successes and errors.
ok 2 - Successfully linked to http://www.perlmonks.com.
#
ok 3 - . . . and found expected content for http://www.perlmonks.com.
#
ok 4 - Successfully linked to http://www.campaignfoundations.com.
#
ok 5 - . . . and found expected content for http://www.campaignfoundations.com.
#
ok 6 - All tests passed, no text message sent
ok 7 - Configuration file set send_summary = 0, no email sent
ok 8 - Configuration file set send_diagnostics = 0, so diagnostics not sent
ok 9 - Basic tests seem to work.
ok 10 - Seems to return the correct result_log
not ok 11 - Test suite run without results_recipient defined. # TODO On the bleeding edge of development . . .
#   Failed (TODO) test 'Test suite run without results_recipient defined.'
#   at t/14_cover_conditions.t line 99.
# STDOUT is:
#
# not:
#
#
# as expected
ok 12 - No result_recipient defined, so no email will be sent. # TODO On the bleeding edge of development . . .
not ok 13 - Test suite ran without any critical errors # TODO On the bleeding edge, no critical error report.
#   Failed (TODO) test 'Test suite ran without any critical errors'
#   at t/14_cover_conditions.t line 123.
# STDOUT is:
#
# not:
# ok 1 - Successfully linked to http://www.perlmonks.com.
#
# ok 2 - . . . and found expected content at http://www.perlmonks.com
#
# ok 3 - Successfully linked to http://validator.w3.org/.
#
# ok 4 - . . . and found expected content at http://validator.w3.org/
#
# ok 5 - Successfully linked to http://www.cpan.org.
#
# ok 6 - . . . and found expected content at http://www.cpan.org
#
# ok 7 - Successfully linked to http://www.campaignfoundations.com.
#
# ok 8 - . . . and found expected content at http://www.campaignfoundations.com
#
# as expected
ok 14 - No critical errors found.
ok 15 - No servers had errors.
ok 16 - Eight tests were run.
ok 17 - Four sites were tested.
ok 18 - Sites on four IPs were tested.
ok 19 - Tests: 8, IPs: 4, Sites: 4, CFs: 0; No critical errors found.
And finally the code which is running these tests:

TODO: {
    local $TODO = "On the bleeding edge of development . . . ";
    $tester->{'error'} = undef;
    $tester->{'config'}->delete('global.results_recipients');
    test_out('');
    $tester->test_sites();
    test_test( name     => "Test suite run without results_recipient defined.",
               skip_err => 1 );
    like( $tester->{'error'}, qr/no result_recipient defined/,
        'No result_recipient defined, so no email will be sent.' );
}

# diag("Pierre requested report on all success.");
$config_file = "$cwd/t/testsuite_all_ok.ini";
$tester = Test::MonitorSites->new( { 'config_file' => $config_file } );
test_out("ok 1 - Successfully linked to http://www.perlmonks.com.",
    "ok 2 - . . . and found expected content at http://www.perlmonks.com",
    "ok 3 - Successfully linked to http://validator.w3.org/.",
    "ok 4 - . . . and found expected content at http://validator.w3.org/",
    "ok 5 - Successfully linked to http://www.cpan.org.",
    "ok 6 - . . . and found expected content at http://www.cpan.org",
    "ok 7 - Successfully linked to http://www.campaignfoundations.com.",
    "ok 8 - . . . and found expected content at http://www.campaignfoundations.com");
$tester->test_sites();
TODO: {
    local $TODO = "On the bleeding edge, no critical error report.";
    test_test( name     => "Test suite ran without any critical errors",
               skip_err => 1 );
}
# exit;
is( $tester->{'result'}->{'critical_errors'}, 0, 'No critical errors found.' );
is( $tester->{'result'}->{'servers_with_failures'}, 0, 'No servers had errors.' );
is( $tester->{'result'}->{'tests'}, 8, 'Eight tests were run.' );
is( $tester->{'result'}->{'sites'}, 4, 'Four sites were tested.' );
is( $tester->{'result'}->{'ips'},   4, 'Sites on four IPs were tested.' );
is( $tester->{'result'}->{'message'},
    'Tests: 8, IPs: 4, Sites: 4, CFs: 0; No critical errors found.',
    'Tests: 8, IPs: 4, Sites: 4, CFs: 0; No critical errors found.' );
That commented-out exit; line was used to check the output written to the /tmp files.

if( $lal && $lol ) { $life++; }

Replies are listed 'Best First'.
Re: Two test suite design questions.
by Anno (Deacon) on Mar 22, 2007 at 12:45 UTC
    I have only a meta-comment, no answers to your questions.

    I think you'd have better chances of receiving a useful reply if you made your questions more self-contained.

    The first one essentially says, "Dear monks, please wait till I get around to releasing my module, download it, find the commented-out section in Makefile.PL and tell me how to make it work". A respondent would have to do that before being able to decide whether he or she has anything reasonable to say about the question.

    Similarly, your second question requires the reader to re-read a month-old thread, digest the replies you got then, and see how that relates to the material you posted now. A quick glance shows that the old thread talks about a test script t/12_exercise_test_sites_method.t while here you mention t/14_cover_conditions.t. Are these the same? Again, there's a non-trivial amount of work involved just to find out what exactly the question is.

    Anno

Re: Two test suite design questions.
by chromatic (Archbishop) on Mar 22, 2007 at 15:24 UTC
    When I run it with perl t/14_cover_conditions.t, I get two failures from 19 tests.

    There aren't any failures in the output you quoted. Failing TODO tests count as passing tests; they're expected failures.

    We'd have to see the results of tests 15 through 19 (and the test code) to give you any further guidance.