in reply to Re: Self-testing modules
in thread Self-testing modules

This solution will not allow you to use modules (or do other compile-time work) cleanly. Any use statement (or no statement, BEGIN block, END block, etc.) will always get executed no matter how the enclosing module is used.

The previous method (ending the module with __END__ followed by a #!perl line, then running perl -x module.pm), while more cumbersome in some ways, lets you do compile-time work as normal.
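To make the pattern concrete, here is a minimal, self-contained sketch. My::Counter and its methods are invented for illustration; the script writes the module to a temp directory and then invokes `perl -x` on it, so you can see that a plain `use` stops at __END__ while `perl -x` runs only the embedded tests.

```perl
#!/usr/bin/perl
# Sketch of the __END__ + "#!perl" self-test pattern (hypothetical module).
use strict;
use warnings;
use File::Temp qw(tempdir);

my $dir = tempdir( CLEANUP => 1 );
mkdir "$dir/My" or die "mkdir: $!";

# Write a tiny module whose unit tests live after __END__.
open my $fh, '>', "$dir/My/Counter.pm" or die "open: $!";
print $fh <<"EOF";
package My::Counter;
use strict; use warnings;
sub new   { bless { n => 0 }, shift }
sub incr  { ++\$_[0]{n} }
sub count { \$_[0]{n} }
1;
__END__
#!perl
# Reached only via `perl -x My/Counter.pm`; a normal `use My::Counter`
# stops compiling at __END__, so these tests never run for callers.
use strict; use warnings;
use lib '$dir';
use My::Counter;
my \$c = My::Counter->new;
\$c->incr for 1 .. 3;
die "count failed\\n" unless \$c->count == 3;
print "all tests passed\\n";
EOF
close $fh;

# -x tells perl to skip leading text until a line starting with #!
# that contains "perl", so execution begins at the test section.
our $out = `"$^X" -x "$dir/My/Counter.pm"`;
die "self-test failed\n" unless $out =~ /all tests passed/;
print $out;
```

Running the file normally (or use-ing it) executes none of the test code, because compilation stops at __END__.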

--DrWhy

"If God had meant for us to think for ourselves he would have given us brains. Oh, wait..."

Re^3: Self-testing modules
by BrowserUk (Patriarch) on Jul 22, 2005 at 20:54 UTC
    Any use statement (no statement, BEGIN block, END block, etc.) will always get executed no matter how the enclosing module is used.

    That is true, so I don't do that. I don't use the Test::* modules as they tell me what passed rather than what failed. I also have very definite ideas about the form that unit testing should take and that does not fit well with the pattern of a zillion ok()/not_ok() tests that those modules encourage.

    If my test code needs additional modules, I require them, not use them. On the rare occasions I've felt the need for BEGIN/INIT/CHECK/END blocks, they've been an inherent part of the module, not the test code.
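A minimal sketch of that require-not-use point, using the common "modulino" guard; the structure and names here are illustrative, not BrowserUk's actual code:

```perl
# Sketch: load a test-only dependency with require (runtime) rather than
# use (compile time), so ordinary consumers of the module never load it.
use strict;
use warnings;

sub run_tests {
    # Data::Dumper is loaded only when the tests actually run; a
    # `use Data::Dumper` at the top of the file would execute for every
    # caller that merely use's this module.
    require Data::Dumper;

    my %got    = ( a => 1 );
    my $dumped = Data::Dumper::Dumper( \%got );
    return 0 unless $dumped =~ /'a' => 1/;
    return 1;
}

# Run the tests only when this file is executed directly, not when use'd.
our $passed = caller() ? 1 : run_tests();
print $passed ? "tests passed\n" : "tests FAILED\n";
```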


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
    "Science is about questioning the status quo. Questioning authority".
    The "good enough" maybe good enough for the now, and perfection maybe unobtainable, but that should not preclude us from striving for perfection, when time, circumstance or desire allow.
      I don't use the Test::* modules as they tell me what passed rather than what failed. I also have very definite ideas about the form that unit testing should take and that does not fit well with the pattern of a zillion ok()/not_ok() tests that those modules encourage.

      I've been in a QA position for only a few months (though I've been coding Perl for years) and am interested in new perspectives on testing software. Would you care to tell me what modules you do use in testing and expand more on your perspective on testing?

      --DrWhy


        See Test::LectroTest for the only CPAN test module (I consider) worthy of the Test:: prefix.

        Simplified rationale:

        Most bugs arise as a result of the programmer making assumptions. If the same programmer writes the tests for the code s/he wrote, they will make the same assumptions. The net result is that they write tests for every case they considered when writing the code, all of which pass--giving them N of N tests (100%) passed and a hugely false sense of security.

        The cases they fail to test for are the same cases they failed to consider when writing the code, and those are normally the same cases that crop up as soon as they demo it or put it into production.

        With anything other than the most trivial of functions, hoping to test all possible combinations of inputs and verify those outputs is forlorn--i.e. impossible in any practical sense of the term.

        Therefore, the only way to test code is to test its compliance against a (rigorous) specification, and derive confidence through statistics. Ideally, this would go one step further than LectroTest and retain a record of failing values, which would be reused (along with a new batch of randomly generated ones) at each subsequent test cycle. (IMO) this is the only way that testing will be lifted out of its finger-to-the-wind, guesswork state and moved into something approaching a science.

        LectroTest isn't perfect (yet). It has fallen into the trap of becoming "expectation compliant", inasmuch as it plays the Test::Harness game of supplying lots of warm fuzzies in the form of ok()s, and perpetuating the anomaly of reporting 99.73% passed instead of 0.27% failed, or better still:

        ***FAIL*** Line nnn of xxxxxx.pl failed running function( 1, 2, 3 ); Testing halted.

        Preferably it would drop the programmer into the debugger at the point of failure. Even more preferably, it would do so in such a way that the program could be back-stepped to the point of invocation and single-stepped through the code with the failing parameters in place, so that the failure can be followed.
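A hand-rolled sketch of the approach described above: random inputs checked against a property taken from the spec, with a regression list of previously failing values replayed on every run, and a report of what failed rather than what passed. This illustrates the idea only; it is not Test::LectroTest's actual API, and my_abs is a made-up function under test.

```perl
# Specification-based random testing with retained failing values (sketch).
use strict;
use warnings;

# Hypothetical function under test.
sub my_abs { my ($n) = @_; $n < 0 ? -$n : $n }

# Property taken from the specification: abs(n) >= 0 and abs(n) == abs(-n).
sub property_holds {
    my ($n) = @_;
    return my_abs($n) >= 0 && my_abs($n) == my_abs(-$n);
}

my @regression = ( -1, 0, 1 );  # values recorded from earlier failures/edges
our @failures;                  # new failing inputs, recorded for reuse

for my $n ( @regression, map { int( rand(2001) ) - 1000 } 1 .. 100 ) {
    push @failures, $n unless property_holds($n);
}

if (@failures) {
    # Report the failures, not the passes.
    print "***FAIL*** property violated for: @failures\n";
}
else {
    print "property held for all sampled inputs\n";
}
```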

