http://qs1969.pair.com?node_id=578279

Testing has been a bit of a problem for me over the last few months, as some will have heard from my CB rants. Not the 'make test and watch the TAP output come pouring out' type of testing - oh dear no. For much of the last 8 months, I've been doing a great deal of manual testing, of the 'type these commands, compare 50 fields in the database output with the 50 fields printed in the test script, put a p (pass) or f (fail) next to each one, sign and date, then go on to the next one of the 150 tests' type. Yuk! Human beings did not evolve to do this kind of thing.

So I enthused about automated testing and tried to get my colleagues on side (with a degree of success), but there is one major stumbling block: the Quality Assurance team. Quite rightly, they're independent. Unfortunately, they don't permit automated testing unless it's done with an approved, validated (company-validated, that is) automated testing tool (TestDirector, for example). They're also entirely non-technical, and really only concerned with the quality (in terms of change tracking, consistency, etc.) of documents. The net result is that testing carries an extraordinary time overhead, and we have to think carefully about which tests to run for a given release. This means that testing is not as thorough as it could, or should, be, and bugs creep through. Not as many as you might expect in this situation, but nevertheless more bugs find their way into production than I consider acceptable.

This stalemate has been going on for some time. Years, actually. Then on Monday something happened. A big, fat bug in some of my code showed up in production. Embarrassing. This bug means that I now have to run a manual report daily for the next couple of weeks until we can patch, to take the place of the automated report that I broke. Embarrassing and irritating, especially since another bug had been emergency-fixed that morning.

At that point I realised that, just like the QA people, I'd lost sight of the real issue - testing is about finding bugs, not filling in forms. If the formal, QA approved testing is less thorough than it should be, we have to make sure that the code gets properly tested some other way.

So I got to work writing unit tests with Test::More.
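
For anyone who hasn't come across Test::More, the basic pattern is very simple: declare a plan, exercise the code, and make assertions with is(), ok() and friends. Here's a minimal sketch (the trim() sub is just a made-up stand-in for the code under test, not anything from our real system):

    use strict;
    use warnings;
    use Test::More tests => 3;

    # A trivial sub standing in for the real code under test.
    sub trim {
        my $s = shift;
        return undef unless defined $s;
        $s =~ s/^\s+|\s+$//g;
        return $s;
    }

    is( trim("  foo  "), "foo", "leading/trailing whitespace stripped" );
    is( trim(""),        "",    "empty string passes through" );
    is( trim(undef),     undef, "undef passes through unchanged" );

Run that with prove (or plain perl) and you get TAP output plus a pass/fail summary - no signing and dating of printouts required.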

Three days of work later, I've got one of the components up to 50% test coverage and found three bugs in edge cases that have never shown up in production. Unfortunately we probably can't test everything this way, since the Perl code is only one component, running in an embedded Perl interpreter inside a proprietary application. Integration testing still needs to be done the old way, so our test overhead has gone up by the effort needed to write unit tests, but at least the chances of bugs getting through are reduced.

Another advantage of testing with the Perl testing modules is the availability of Devel::Cover. Because the unit testing is informal and unvalidated, test cases can be added any time. If someone has a few minutes spare, a quick run of the test suite with Devel::Cover will show up opportunities for improving the testing.
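
The workflow is about as lightweight as it gets - roughly this (the test filename is just an example; see the Devel::Cover docs for the exact options):

    cover -delete                   # clear out any previous coverage database
    perl -MDevel::Cover t/mycode.t  # run the tests, collecting coverage data
    cover                           # generate a report from the coverage database

The report shows which statements, branches and subroutines the suite actually exercised, which makes the gaps embarrassingly obvious.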

Something else I'd lost sight of is the fact that we primarily want to test our code, not someone else's. A lot of our code depends heavily on Net::LDAP, so the need to provide a correctly configured directory server looked like a barrier to automated testing. However, end-to-end integration testing already covers the 'get data back from the directory server' test case. If there's no directory server easily available for unit testing, we can invade the dependency's namespace to let us test our own code:

    use strict;
    use warnings;
    use Test::More tests => 2;

    # Load the code under test; this pulls in the real Net::LDAP.
    require 'MyCode.pl';

    # Now replace the two Net::LDAP methods we care about with our own
    # test versions, so no real directory server is needed.
    {
        no warnings 'redefine';
        *Net::LDAP::new  = \&ldapnew;
        *Net::LDAP::bind = \&ldapbind;
    }

    MyCode::bindToLDAP("hostname", "port", "cn=binddn", "password");

    sub ldapnew {
        # Called as Net::LDAP->new(...), so the class name arrives first.
        my ($class, $host) = @_;
        is($host, "hostname:port",
            "Check that Net::LDAP::new receives the right params");
        return bless {}, 'Net::LDAP';    # dummy object for bind() to be called on
    }

    sub ldapbind {
        # Called as $ldap->bind(...), so the object arrives first.
        my ($self, %params) = @_;
        my %comparison = (
            dn       => "cn=binddn",
            password => "password",
        );
        is_deeply(\%params, \%comparison,
            "Check that Net::LDAP::bind gets the right params");
    }

I'm hoping I can get the vendor of the core application to give us information on externally accessing the test functions in their application via XS, so that we can extend the unit tests to include the application config. I'm not hopeful on that front, but it's worth a try.

Unfortunately, testing this way doesn't remove the requirement to do the formal testing the old way, so the drudgery remains, but at least the code is being tested properly and the chances of embarrassment are that much smaller.

One final note: in the mindless drudgery of manual testing, I'd also forgotten how much fun one can have writing tests to try and break things :-)

--------------------------------------------------------------

"If there is such a phenomenon as absolute evil, it consists in treating another human being as a thing."
John Brunner, "The Shockwave Rider".