stevieb has asked for the wisdom of the Perl Monks concerning the following question:

First, thanks for the feedback on part I. As I wait for Perl Medic to arrive, I've been reading some other documentation.

Things seem to be clicking a bit. Along with keeping my POD up-to-date, I've also been writing tests. I've slowly been finding it easier to write tests into proper test files for my modules, but there are some things I still find easier to write one-offs for. I'd like to eliminate this behavior completely.

Some of my processes have a high number of levels in their stack traces. Some methods delete entries from an external DB and create another record with updated data. Some rely on external information that the program doesn't know at runtime. The side effect of this is having to manually edit test variables frequently.

My question is this: does it make sense to use Storable's store() to keep known data in a sub-directory of the t/ dir, in order to 'simulate' results, so I don't have to manually edit test files? In theory, if my database schema doesn't change, then loading test data should be fine. That way I don't have to code DB access into all of my test files, and as long as I use known-good data, I can be confident when I write a test before I write the sub. Does this sound reasonable? Is it acceptable to ship the test data in the distribution this way?
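A minimal sketch of what that could look like. In practice the fixture would live under t/data/; a temp directory keeps this self-contained, and the record layout here is entirely made up:

```perl
use strict;
use warnings;
use Storable qw(store retrieve);
use File::Temp qw(tempdir);

# Stand-in for t/data/; the file name and record fields are hypothetical.
my $dir  = tempdir(CLEANUP => 1);
my $file = "$dir/widget.stor";

# One-time: freeze known-good data to disk.
my $known_good = { id => 42, name => 'widget', status => 'active' };
store($known_good, $file);

# In a test file: load the fixture instead of querying the DB.
my $record = retrieve($file);
print "status: $record->{status}\n";   # prints "status: active"
```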

Steve


Re: Help with design philosophy II: Testing
by Sewi (Friar) on Sep 05, 2009 at 10:00 UTC
    Using static data will make your tests easier to write and easier to maintain - as you noticed - but it will also leave your tests unable to catch any mismatch between your module and your database layout, which could cause big problems.

    I suggest deciding this on a per-test basis:

  • For functions which don't really require DB access, use static data and static results.
  • For functions which do a lot in your DB (which must be cleaned up after the test), create a test DB and work there. Create the table(s) or data at the beginning of the test and drop the whole thing after the test is done. Depending on how you write it, this may limit you to one simultaneous run of each test (or of a group of tests which use/change the same data), but this isn't a problem in most cases.
  • Some functions do only a few operations which can easily be reverted; these should work in their real environment, even if you need to clean things up after the test.
  • You could also redefine subs to make testing easier.
  • Redefining, a simple example: We had a function for error handling which could trigger three levels of action. When running tests and forcing some errors to check that they were detected, many lines were written to the error log on our dev server (no problem), many mails were sent to our dev group's postbox (annoying), and sometimes even emergency procedures were started (which cost money).
    We solved this by redefining the actual error handling function in the test script:

    [...]                # Init tests, prepare everything, etc.
    require 'error.inc'; # The regular handling functions are defined here; Log_Error is one of them
    our $ErrorText;
    our $ErrorLevel;
    eval {
        sub Log_Error {
            ($ErrorText, $ErrorLevel) = @_;
            return 1;
        }
    };
    (I know, this doesn't use modules, but there shouldn't be any difference.)
    This loads the regular handling include (which means you don't run into missing subroutines, as you may when faking %INC) and then overrides the subroutine defined by the require'd file with its own. Now any function which may or may not throw an error can be followed by:
    # for expected errors:
    ok($ErrorText eq 'foo',   'Error-checking');
    ok($ErrorLevel eq 'mail', 'Error-level-checking');
    # for tests which shouldn't trigger one:
    ok($ErrorText eq '', 'Check for errors');
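The same override can be written as a glob assignment, which works after the real definition is already loaded and avoids compile-time ordering surprises. A self-contained sketch (the stand-in Log_Error below takes the place of whatever error.inc would define; names and arguments are illustrative):

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Stand-in for the sub the real include file would define.
sub Log_Error { die "real handler: would log, mail, and page someone" }

our ($ErrorText, $ErrorLevel);
{
    # Replace the already-defined sub; the pragma silences the
    # expected "subroutine redefined" warning.
    no warnings 'redefine';
    *Log_Error = sub { ($ErrorText, $ErrorLevel) = @_; return 1 };
}

Log_Error('foo', 'mail');   # simulate the code under test raising an error
is($ErrorText,  'foo',  'captured error text');
is($ErrorLevel, 'mail', 'captured error level');
```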

    Test-DB Create an environment specially for the test. If you're using SQL, make the DB name or table name(s) configurable and override the configured values with fixed values for the test. Use the DB/environment creation functions you created for your project, so your test environment will stay up to date with everything. Drop everything which is created automatically after running the test. This may be done with "DELETE FROM Test_Table", which should be easier than searching a real table for the test-created records.
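One convenient way to get a throwaway test DB, assuming DBD::SQLite is available: an in-memory handle vanishes when the test exits, so there is nothing to clean up. The table name and schema here are invented for illustration; a real test would call the project's own schema-creation code instead:

```perl
use strict;
use warnings;
use DBI;

# A fresh, private database per test run; no cleanup needed at all.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1, PrintError => 0 });

# In a real suite, call your project's schema-creation functions here
# so the test layout stays current with production.
$dbh->do('CREATE TABLE Test_Table (id INTEGER PRIMARY KEY, name TEXT)');
$dbh->do('INSERT INTO Test_Table (name) VALUES (?)', undef, 'fixture');

my ($count) = $dbh->selectrow_array('SELECT COUNT(*) FROM Test_Table');
print "rows: $count\n";   # prints "rows: 1"

# For a file-backed test DB, this is the cleanup step described above:
$dbh->do('DELETE FROM Test_Table');
```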

    Make tests self-defining In a few cases, you can read things from your system's parameters or databases and make your tests behave based on what was read. I don't have a good example in mind right now, but think of a test checking the number of configuration options written to a config file: the test would change each time you add an option, but if you have a reliable source for the options, you can read the list and check that every option defined is really written. This changes the number of tests dynamically, but it is possible. Not a great example, sorry.
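A small sketch of that idea, with Test::More's done_testing() letting the plan grow with the option list. Both the option list and the "parsed config" hash are hypothetical stand-ins for whatever authoritative source and generated file the real project has:

```perl
use strict;
use warnings;
use Test::More;

# Hypothetical authoritative list of options (e.g. read from a spec file).
my @options = qw(host port timeout);

# Hypothetical result of parsing the generated config file.
my %written = (host => 1, port => 1, timeout => 1);

# One test per option; adding an option grows the suite automatically.
ok($written{$_}, "option '$_' written to config") for @options;

# done_testing() replaces a fixed plan, so the count may change freely.
done_testing();
```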

    Finally, the answer in most cases should be yes: you need to change your tests on many system changes. More tests mean less trouble, but also mean updating more files when changing something.

Re: Help with design philosophy II: Testing
by tmaly (Monk) on Sep 08, 2009 at 14:46 UTC

    Storable has different versions that do not always mix well.

    If someone has a different Storable version installed, they may not be able to read the test data in your distribution. Consider using a different format that is more cross-platform and version-independent.
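    For example, plain-text JSON fixtures sidestep Storable's binary format-version problem entirely, since any Perl (or any other language) can read them back. A minimal sketch using JSON::PP, which has shipped with core Perl since 5.14; the fixture fields are invented:

```perl
use strict;
use warnings;
use JSON::PP qw(encode_json decode_json);

# Hypothetical fixture data; in practice this would be written to a
# file under t/data/ and read back in the test.
my $fixture = { id => 42, status => 'active' };
my $json    = encode_json($fixture);

my $copy = decode_json($json);
print "status: $copy->{status}\n";   # prints "status: active"
```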