in reply to Using Test modules in production scripts

The main reason not to go this route is that you take away the caller's control over what gets spit out in the case of a test failure. What if the caller simply wants to know that the file failed to open and can try another file, or just tell the user? Would you really want a bunch of TAP output spewing onto the terminal even when the caller considers the exception irrelevant or recoverable?
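To make the contrast concrete, here is a minimal sketch (the function and file names are my own invention) of a routine that simply throws and leaves the reporting decision with its caller:

    use strict;
    use warnings;

    sub slurp_file {
        my ($file) = @_;
        open my $fh, '<', $file or die "Cannot open $file: $!\n";
        local $/;                  # slurp mode
        return <$fh>;
    }

    # one caller recovers by trying another file...
    my $data = eval { slurp_file('primary.dat') };
    $data = eval { slurp_file('fallback.dat') } unless defined $data;

    # ...and just tells the user if neither worked - no TAP noise either way
    warn "No usable data file found\n" unless defined $data;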

Secondly, I don't see tests outside of a program as duplication at all. They have a very different purpose. When I write tests I'm usually testing a wide variety of inputs to a function. I don't assume an exception is bad; rather, I may deliberately provoke exceptions. I want to make sure that an exception is thrown when conditions warrant it. Ironically, in some cases a test passes only when the exception is thrown.
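In a t/ file that expectation can be stated directly. A minimal sketch using Test::Exception, assuming a hypothetical MyApp::slurp_file() that dies when it cannot open its argument:

    use strict;
    use warnings;
    use Test::More tests => 1;
    use Test::Exception;

    use MyApp;    # hypothetical module under test

    # the test passes only if the exception is actually thrown
    throws_ok { MyApp::slurp_file('/no/such/file') }
        qr/Cannot open/,
        'missing file raises the expected exception';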

Perhaps your actual goal here is logging? If so, I think you might be better off acquainting yourself with a module like Log::Log4perl, which gives you a permanent record (a log file) and a lot of control over how much or how little output is generated and where it is stored.
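A minimal sketch of what that might look like with Log::Log4perl's :easy mode (the log file path and data file name are assumptions):

    use strict;
    use warnings;
    use Log::Log4perl qw(:easy);

    # send WARN and above to a permanent log file rather than the terminal
    Log::Log4perl->easy_init(
        { level => $WARN, file => '>>/var/log/myapp.log' }
    );

    open my $fh, '<', 'primary.dat'
        or WARN("Cannot open primary.dat: $!");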

Or maybe you are inspired in this direction by the rich assortment of assertions that have already been programmed for you and wrapped in test modules? In some cases these can be had without using a test module. For example, Data::Compare can compare arbitrary Perl data structures, and Data::Match compares your structure against a pattern.
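For example, a structure check with Data::Compare needs no test harness at all (the settings loader here is hypothetical):

    use strict;
    use warnings;
    use Data::Compare;    # exports Compare()

    my $defaults = { user => 'anonymous', retries => 3 };
    my $settings = load_settings();    # hypothetical function

    warn "Settings differ from the defaults\n"
        unless Compare( $settings, $defaults );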


Re^2: Using Test modules in production scripts
by xssnark (Initiate) on Feb 14, 2011 at 22:57 UTC
    Hello Elisheva,

    Thank you for your advice.

    My goal is to provide testing such that the application can be deployed to a new platform and problems will be caught before it has a chance to damage anything.
    This question came to mind because I'm using the app tests to write the t/ tests. Perhaps I could simply reduce the amount of duplicated code between the two; having test code in two locations means I'll have to maintain it in two locations.
    It seems youwin's suggestion in post # 888082 is probably the correct approach for reducing the redundant test code. If the t/ tests are sufficient, the tests in the app will be unnecessary.

    Thanks Again. X.

      xssnark:

      For production deployments, I've been toying around with having the script automatically run the test suite any time the script is newer than the test script. So when I modify a program or module, it will automatically run the test suite. Only after I'm satisfied with my changes do I touch the test script. If I accidentally make a change without updating the test script, the fact that the test suite runs alerts me immediately. (Note: the toy scripts I've been experimenting with don't prevent the script from running; they just run the test suite before normal operation.)
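      Roughly along these lines (a rough sketch only; the test file name and the use of prove are assumptions):

          use strict;
          use warnings;

          my $script = $0;
          my $test   = 't/myapp.t';    # assumed name of the test file

          # -M is the age in days since last modification,
          # so a smaller value means a newer file
          if ( -e $test && -M $script < -M $test ) {
              system( 'prove', $test ) == 0
                  or warn "Test suite reported failures\n";
          }

          # ... normal operation continues here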

      ...roboticus

      When your only tool is a hammer, all problems look like your thumb.

      Not a good idea. The tests are run at deployment, but what if someone deletes that directory later? Then your application doesn't work correctly anymore.

      You are optimizing in the wrong place. That directory test costs you about 30 bytes of hard disk space and about 100 microseconds per run of your application (probably less, since the directory is accessed later on anyway). If you think this is wasted time and space, you should switch to machine language programming.
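      For scale, the kind of check being weighed here is a one-liner (the directory name is made up):

          # the cheap in-application sanity check under discussion
          -d '/var/lib/myapp'
              or die "Required data directory /var/lib/myapp is missing\n";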

      I just wrote a script last week where I tried to test everything twice where possible, so as to make sure no error condition escaped unnoticed. It was a script to format and fill an SD card, so, for example, I had two different ways to get the size of the device, just to be sure I didn't make a mistake there.
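      A sketch of that kind of double check (the device path is an assumption, and it is Linux-only since it shells out to blockdev):

          use strict;
          use warnings;

          my $dev = '/dev/sdb';    # assumed device path

          # way 1: ask the kernel via blockdev(8)
          chomp( my $size_blockdev = `blockdev --getsize64 $dev` );

          # way 2: seek to the end of the device and read the offset
          open my $fh, '<', $dev or die "Cannot open $dev: $!";
          my $size_seek = sysseek( $fh, 0, 2 );    # 2 == SEEK_END
          close $fh;

          die "Device size mismatch: $size_blockdev vs $size_seek\n"
              if $size_blockdev != $size_seek;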

      As long as tests are not in time-critical parts of your application, they are your best friends, an asset and a pillow for a peaceful sleep.