pileofrogs has asked for the wisdom of the Perl Monks concerning the following question:

I'm relatively new to developing Perl modules with Test::Harness-style tests, and I feel like I'm missing something.

Basically, I can write Test::Harness tests, but they only tell me good vs. bad, with very little detail. I like to have a lot of detail while I'm actually developing a module. As a consequence, I'm writing two different test scripts (or batches of scripts): one that I use while I'm developing, and one for packaging with the module and running under Test::Harness.

Is there a way to write Test::Harness tests that can give me lots of details when I'm running them by hand, yet work as expected when run by "make test"?

I'll try to clarify a bit what I mean by more detail. I might want to "print ref($thing)" from time to time. I might want to "print $total_score". Sure, I could "warn $total_score", but that would make the output look terrible when testing with "make test". I know there's "make test TEST_VERBOSE=1", which is better, but still not good enough. I'd be happy to write "special_print( ref($thing) );" rather than "print ref($thing)", if that's what it takes.

Thanks!
--Pileofrogs

Replies are listed 'Best First'.
Re: Testing Question
by tirwhan (Abbot) on Feb 24, 2006 at 19:39 UTC

    The Test::More diag function is intended for printing out diagnostics during a test run (which I think is what you're after). Also, to output all the test names during a run you can execute

    prove -vb t/

    instead of make test. That being said, I don't think I entirely understand your purpose here. As far as I'm concerned, the test suite is supposed to consist of a list of simple checks, to make it easier to determine the cause (or at least the locality) of a failure. Printing out debugging output during a normal test run doesn't seem particularly helpful to me (OK, perhaps while you're in the process of debugging a particular problem, but I'd remove that output once I'd solved the problem).
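
    For example, a minimal sketch of diag in a test script (My::Module and total_score are made-up names here):

    use Test::More tests => 1;
    use My::Module;

    my $thing = My::Module->new;
    diag( "thing is a " . ref($thing) );   # printed to STDERR with a '#' prefix
    is( $thing->total_score, 42, "total_score is 42" );

    Because diag output starts with '#', the harness treats it as a comment rather than a test result, so it won't break make test.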


    All dogma is stupid.

      I find debugging output really helpful during development. I don't want to write-remove-write-remove debugging code ad nauseam. Plus, I often find that my problem really comes from somewhere other than the place where the symptoms appear. If foo($thing) is breaking, but the problem is in the part that created $thing, I'd be much better off with complete debugging output, rather than having to put a bunch of stuff into foo(), just to discover that I need to put a bunch of stuff elsewhere. With reasonable debugging output, I'd already have both.

      Thanks for the pointers on prove and diag!

        If foo($thing) is breaking, but the problem is in the part that created $thing

        Then this should be caught by the tests for that part. That's kinda my point: by relying on debugging output which you inspect manually, you're hiding the complexity of your program and missing out on vital tests. You are obviously smarter than your test suite, and you can reason "OK, this is the output I was expecting, because I know of this side effect which I'm not testing for otherwise". But this grows too complicated really quickly, and IMO it'd be much better to write simple tests for everything so there are no untested side effects. That way you don't risk missing an unexpected value.

        Obviously we all have our own methodology and I'm not trying to foist my way of thinking on you or tell you that you're wrong. Just trying to explain how I've found things work best for me.


        All dogma is stupid.
Re: Testing Question
by xdg (Monsignor) on Feb 24, 2006 at 22:02 UTC

    My suggestion is to write more tests and label them very well. Then use prove -vb to run them when you want to see all the detail. Another good approach is to use diag but only when a test fails:

    is( $thing->foo(), 42, "foo is 42" ) or diag $thing->as_string();

    More generally, you may want to rethink your testing structure (or even your module structure). Personally, I like to do test-driven development. I write the tests first for each function, defining the expected outputs for a range of valid inputs to each argument. That lets me play-test the API up front. If I hate the API as I write the tests, I can refine it before I've even coded anything. Then I code it and make sure it works. Then I write tests for how the function handles invalid inputs.

    This may seem like a lot of work, but it doesn't have to be if you put your inputs and outputs in a data structure and let your tests run in a simple loop. I elaborate a bit on this in Re^2: Wanted, more simple tutorials on testing.
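
    For instance, a table-driven sketch of that pattern (factorial here is just a stand-in for whatever function you're testing):

    use Test::More;

    sub factorial {
        my $n = shift;
        my $r = 1;
        $r *= $_ for 1 .. $n;
        return $r;
    }

    # one row per case: [ input, expected output, label ]
    my @cases = (
        [ 0, 1,   "factorial(0) is 1"   ],
        [ 1, 1,   "factorial(1) is 1"   ],
        [ 5, 120, "factorial(5) is 120" ],
    );

    plan tests => scalar @cases;

    for my $case (@cases) {
        my ( $in, $want, $label ) = @$case;
        is( factorial($in), $want, $label );
    }

    Adding a new test is then just a matter of adding a row to @cases.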

    If you feel that there are too many possible outputs to test them all, you may need to break your function down into several smaller units, each of which is easier to test. If you feel you need to trace what's happening in the middle of your code, that might be a warning sign that you're testing at too high a level.

    If I still can't figure out why a test isn't working from examining the output of a function that is failing a test, that's when I use the debugger to jump to that part of my test script and then step through it. Sadly, prove doesn't make it easy to run the debugger. You have to set an environment variable:

    HARNESS_PERL_SWITCHES="-d"
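
    e.g., from a Bourne-style shell (t/mytest.t being whatever test you're debugging):

    $ HARNESS_PERL_SWITCHES="-d" prove t/mytest.t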

    Alternatively, if you're using ExtUtils::MakeMaker or Module::Build, they will set it for you:

    $ make testdb TEST_FILE=t/mytest.t          # ExtUtils::MakeMaker
    $ Build testdb --test_files t\02_trivial.t  # Module::Build

    -xdg

    Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.

      My suggestion is to write more tests and label them very well.
      Yes. And to find out what the "more tests" should be, make friends with Devel::Cover. It'll show you exactly what you are and are not testing: subroutines, conditionals, and branches.
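
      A typical run, per the Devel::Cover documentation (adjust the test command to your build tool):

      cover -delete
      HARNESS_PERL_SWITCHES=-MDevel::Cover make test
      cover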
Re: Testing Question
by BrowserUk (Patriarch) on Feb 24, 2006 at 21:22 UTC

    For tracing progress, watching variables, and adding development-time asserts etc. to your code in a way that you can disable easily (change one line) for production, knowing they will have zero effect on its performance, take a look at Devel::StealthDebug.

    Cutesy name (though I cannot think of a better one), and it is a source filter, but the source filtering only happens when enabled, so it doesn't touch your production code. The statements remain behind as simple comments, ready for when the bug reports arrive.
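
    Something like this, if I remember the synopsis right (the directive names are from memory, so check the module's docs; DEBUG_MYMODULE and make_thing are just example names):

    use Devel::StealthDebug ENABLE => $ENV{DEBUG_MYMODULE};

    my $thing = make_thing();   # make_thing is a stand-in
    #!assert(ref($thing))!

    With ENABLE false, the #!...! lines are plain comments and cost nothing.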

    I'm not sure how it would interact with Test::Harness, though.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.