Re: RFC: Tutorial on Testing

by BrowserUk (Patriarch)
on Sep 18, 2004 at 12:42 UTC


in reply to RFC: Tutorial on Testing

I skipped by this the first time around. I think your original title put me off. I'm very glad it was re-titled. I'm even more glad that I took the time to read it.

This is a test tool worthy of occupying space in the rather lacklustre Test::* namespace--even if the name itself is somewhat mysterious :)

I think this could easily have been called Test::Smart, in keeping with a local advertising slogan: "Work smarter, not harder".

It's too early yet in my understanding of Test::LectroTest (So good, it tests twice as many times? :) for me to have found the answers to these questions myself, so I'll ask them here before going on to read more:

  1. It feels like it should be possible to fold the logic of Lectro into the actual code, such that the generator bindings could do double duty as parameter verifications?

    My thoughts here are that if the bindings were embedded within the functions themselves and served as parameter checks (possibly disable-able for production), they would be more likely to stay in step with changes in the specification of the function over time. (I've roughed out the kind of thing I mean in the first sketch after this list.)

  2. I've had a fairly long history with using testcase generators (some further info: Re: Software Design Resources & Re: Re: Re: Software Design Resources, etc.), and with writing them. One of the major benefits of using (directed) random testcase generators is that it is possible to infer some "measure of goodness" statistically from the number of testcases run -v- bugs found.

    For this to be properly effective, it requires not just the overall count of tests run, but an analysis of those tests to produce a measure of coverage. For a function taking integers as input, the range is finite and quantifiable, and by accumulating the actual values used in generated tests, it becomes possible to derive a coverage statistic. This is much harder for other types of parameter with continuous input ranges, but even these can often be quantified on some basis, relative to the code under test.

    The key here is that it requires that generated parameters be logged and accumulated (the second sketch below shows the sort of thing I mean). Is there any intent to provide this type of facility?

  3. Finally, your chosen example is very good. It allows you to demonstrate the benefits of the approach with something that is apparently simple, but for those of us who have forgotten the schoolboy diagrams we drew showing just how inconvenient it is to do modular math on angles, it allows us to be delighted by the step-by-step revelations :)

    However, in its simplicity, it suffers from not showing how difficult it can be to choose the correct ranges for sampling. I'm going on to read the rest of the docs, but do you have any advice/tutorials on how to go about selecting ranges for cases where the inputs are more complex (strings, arrays, hashes, etc.)? (My own first stab at this is the third sketch below.)
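
For point 1, this untested sketch is roughly what I have in mind: one spec hash supplies the range that would bind the generator and also guards the parameters, with a compile-time switch to remove the checks in production. angle_diff(), the 0..360 range, and the MYAPP_CHECKS environment variable are all stand-ins of my own invention, borrowed loosely from your example:

    use strict;
    use warnings;
    use Carp;

    # Cheap compile-time switch: set MYAPP_CHECKS=0 in production and the
    # checks below are constant-folded away.
    use constant CHECKS => exists $ENV{MYAPP_CHECKS} ? $ENV{MYAPP_CHECKS} : 1;

    # One spec, usable both to bind the generator and to guard the parameters.
    my %SPEC = ( angle => [ 0, 360 ] );

    sub angle_diff {
        my( $x, $y ) = @_;
        if( CHECKS ) {
            for( $x, $y ) {
                croak "angle out of range: $_"
                    if $_ < $SPEC{angle}[0] or $_ > $SPEC{angle}[1];
            }
        }
        my $d = abs( $x - $y ) % 360;
        return $d > 180 ? 360 - $d : $d;
    }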
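
For point 2, the accumulation I have in mind is no more than this: tally every generated value, then report how much of the (finite) input range was actually exercised. Again a hand-rolled sketch, with the 0..360 range standing in for whatever the code under test accepts:

    use strict;
    use warnings;

    my( $LO, $HI ) = ( 0, 360 );
    my %seen;

    sub log_input {              # call with each generated trial value
        my $x = shift;
        $seen{$x}++;
        return $x;
    }

    sub coverage_report {
        my $trials = 0;
        $trials += $_ for values %seen;
        my $distinct = keys %seen;
        my $possible = $HI - $LO + 1;
        printf "%d trials, %d of %d possible inputs seen (%.1f%% coverage)\n",
            $trials, $distinct, $possible, 100 * $distinct / $possible;
    }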
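
And for point 3, this is as far as I've got with structured inputs: make every structural knob (element range, list length) an explicit, quantifiable choice in the generator. The List/Int arguments are my reading of the Test::LectroTest::Generator docs rather than gospel, and normalise_angle() is just a throwaway function so the property has something to chew on:

    use strict;
    use warnings;
    use Test::LectroTest;

    # Throwaway normaliser, purely for illustration.
    sub normalise_angle { my $n = shift() % 360; $n < 0 ? $n + 360 : $n }

    Property {
        ##[ angles <- List( Int( range=>[-720, 720] ), length=>[1,10] ) ]##
        my @norm = map { normalise_angle( $_ ) } @$angles;
        ! grep { $_ < 0 or $_ >= 360 } @norm;
    }, name => "normalise_angle() always lands in [0,360)";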

Overall, the tutorial was extremely readable--which I consider very important in such material; methodical accuracy is of little good if no one reads it--and a damn good advert both for the module and, more importantly, for the methodology it uses.

I've expressed my doubts over the efficacy of the types of testing possible using most of the Test::* modules. Part of the problem is that this is another of those areas where "more" rarely equates to "better".

It is also the case that when tests are coded by the same person that writes the code, they tend to concentrate the tests on those areas of the code that they spent the most time thinking about. Often this works out to be the areas that they had most trouble coding. Inevitably, it's the areas that they spent least time thinking about that need the most testing.

In an ideal world, we would all have a test engineer to design/code our tests for us, but that is an increasingly rare situation. Using a test module that takes input in a generalised form and then randomly distributes the tests is a real step forward (IMO).

If this methodology can be combined with DBC (Design by Contract) constraints, it would further lift the testing away from the assumptions of the coder, allowing the designer (as often as not, the coder wearing a different hat) to specify the function in terms of constraints applied to the inputs, and then letting a module such as this take over the testing from there.
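
Purely by way of illustration (hand-rolled, not a real DBC framework, and again leaning on the angle example), the separation I mean looks something like this: the contract is stated once, and a dumb random driver -- or a module such as Test::LectroTest -- does the rest:

    use strict;
    use warnings;

    # The "contract": a precondition on the inputs, a postcondition on the result.
    my %contract = (
        pre  => sub { my( $x, $y ) = @_; $x >= 0 && $x < 360 && $y >= 0 && $y < 360 },
        post => sub { my $d = shift;     $d >= 0 && $d <= 180 },
    );

    sub angle_diff {
        my( $x, $y ) = @_;
        my $d = abs( $x - $y ) % 360;
        return $d > 180 ? 360 - $d : $d;
    }

    # A dumb random driver; the coder never picks the cases.
    my( $trials, $failures ) = ( 0, 0 );
    for ( 1 .. 1000 ) {
        my( $x, $y ) = ( rand 360, rand 360 );
        next unless $contract{pre}->( $x, $y );   # skip inputs outside the contract
        $trials++;
        $failures++ unless $contract{post}->( angle_diff( $x, $y ) );
    }
    print "$trials trials, $failures contract violations\n";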

Ultimately, if the actual test values could be accumulated and analysed, you would have the basis of a viable quality metric.


Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"Think for yourself!" - Abigail
"Memory, processor, disk in that order on the hardware side. Algorithm, algorithm, algorithm on the code side." - tachyon
