GrandFather has asked for the wisdom of the Perl Monks concerning the following question:

I'm embarking on writing a test suite for the HTML renderer used by PerlMonks Editor. I could check the rendered HTML by matching strings against the expected result, but this is HTML, and exact string matching is a fragile way to check it.

I could use a module such as HTML::Diff which provides semantic differencing between two chunks of HTML, but that introduces an extra dependency that is only required for testing.

I could "repackage" key pieces of HTML::Diff to do the work I need, but that could be a lot of work and may be dodgy with respect to licensing.

My inclination at the moment is to go the small dependency but possibly fragile route of pretty much just directly comparing output with expected output. Is there a better solution that I have overlooked?
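
Concretely, the direct-comparison route would be little more than the following (render_html() and the expected markup are invented here purely for illustration):

    use strict;
    use warnings;
    use Test::More tests => 1;

    # render_html() stands in for whatever the PerlMonks Editor renderer exposes.
    my $got      = render_html('[id://123] is a *great* node');
    my $expected = '<a href="?node_id=123">id://123</a> is a <b>great</b> node';

    # Fragile: any harmless change in whitespace, attribute order or quoting
    # fails the test even though the HTML is semantically identical.
    is( $got, $expected, 'basic markup renders as expected' );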


DWIM is Perl's answer to Gödel

Replies are listed 'Best First'.
Re: Should I use modules to augment testing
by adrianh (Chancellor) on Aug 13, 2006 at 10:27 UTC
    My inclination at the moment is to go the small dependency but possibly fragile route of pretty much just directly comparing output with expected output. Is there a better solution that I have overlooked?

    Personally I would add the dependency. Better diagnostics for me outweigh any extra installation pain for the user :)

    Other ideas:

    • Don't package the dependency and SKIP the tests if it's not installed (a minimal sketch of this follows the list).
    • Abstract out the comparison to a separate function and call HTML::Diff if that's installed, otherwise do a straight comparison. Take a look at the Test::Differences docs for an example.
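
    A minimal sketch of the first idea, assuming only Test::More and leaving the HTML::Diff-based checks themselves as a placeholder:

        use strict;
        use warnings;
        use Test::More tests => 1;

        # Probe for the optional module without dying if it is missing.
        my $have_html_diff = eval { require HTML::Diff; 1 };

        SKIP: {
            skip 'HTML::Diff not installed', 1 unless $have_html_diff;

            # ... comparisons that rely on HTML::Diff would go here ...
            pass('placeholder for an HTML::Diff-based comparison');
        }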
Re: Should I use modules to augment testing
by mirod (Canon) on Aug 13, 2006 at 11:11 UTC

    My usual solution is to eval / require the testing module during the tests, and to skip the tests if it is not available. I also document what is needed (my last test lists all the modules used during the tests, with the reason they are used).

    I even go as far as having a function that lets me pretend that the module is not available, in order to be able to run the tests in several combinations (otherwise I will inevitably miss some combinations, or skip the wrong number of tests).

    Here is the snippet of code I use (comments welcome, and of course, once a module has been _used, it's too late to disallow its use):

    { my %used;        # module => 1 if require ok, 0 otherwise
      my %disallowed;  # for testing, refuses to _use modules in this hash

      sub _disallow_use {
          my( @modules)= @_;
          $disallowed{$_}= 1 foreach (@modules);
      }

      sub _allow_use {
          my( @modules)= @_;
          $disallowed{$_}= 0 foreach (@modules);
      }

      sub _use {
          my( $module, $version)= @_;
          $version ||= 0;

          if( $disallowed{$module}) { return 0; }
          if( $used{$module})       { return 1; }

          if( eval "require $module") {
              import $module;
              $used{$module}= 1;
              no strict 'refs';
              if( ${"${module}::VERSION"} >= $version ) { return 1; }
              else                                      { return 0; }
          }
          else {
              $used{$module}= 0;
              return 0;
          }
      }
    }
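
    For example, a test script that pulls in the block above (from the same file or a shared helper) might gate its optional checks like this:

        use Test::More tests => 1;

        # To rehearse the "module missing" path, call _disallow_use()
        # *before* the module is ever _use()d, as noted above:
        # _disallow_use('HTML::Diff');

        SKIP: {
            skip 'HTML::Diff not available', 1 unless _use('HTML::Diff');

            # ... checks that rely on HTML::Diff would go here ...
            pass('would compare rendered HTML with HTML::Diff here');
        }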
Re: Should I use modules to augment testing
by jkeenan1 (Deacon) on Aug 13, 2006 at 14:43 UTC
    Include the dependency.

    As discussed in this recent posting on perl.module-authors, this is not something I always recommend or practice. I've done it different ways for different reasons.

    But given (a) that your code's intended audience is likely to be very Perl- and CPAN-savvy, and (b) that you are already requiring a lot of non-core CPAN modules in the production code -- modules that are much less familiar (to me, at least) than HTML::Diff -- I see no reason in this case to be shy about introducing a test-time dependency.

    Jim Keenan