
Re^5: Does anybody write tests first?

by BrowserUk (Patriarch)
on Feb 25, 2008 at 22:44 UTC


in reply to Re^4: Does anybody write tests first?
in thread Does anybody write tests first?

And program code is code. Therefore, if you write no code at all, you'll have no bugs. Of course, you'll also have no features.

Is that a facetious reply? Or did you genuinely think I was not aware of that obvious consequence? :)

On a more serious note: a step of project design that was common years ago, but that seems to be missing from too many shops these days, is risk/benefit analysis. It is entirely possible, and surprisingly common, that once a project has been shown to be possible, and the predicted development effort has been costed, the biggest ROI comes from not doing the project at all.

The point should be clear: you write just the code required to implement the features you need, and you do just as much as is required to test those features.

Writing extra code or tests now to hedge against future possibilities is wrong. There are three possible outcomes of that extra effort--no matter how little extra it is.

  1. You predicted the future exactly:

    No extra effort is required.

  2. The predicted future possibility never comes to pass:

    The extra effort is wasted.

  3. A different--slightly or wholly--new requirement or feature is needed:

    Not only is that early extra effort wasted; it also has to be backed out in order to accommodate the new code.

So, simplistic math puts your chances of predicting the future correctly--and so benefiting from the extra effort expended--at 33%. If you believe that your powers of prescience can do substantially better, give up programming and start playing the stock market or visiting casinos. But keep quiet about it, because your local military psy-ops team is likely to come looking for you in the middle of the night if they get wind of it :)

So is your objection to writing tests first as opposed to after the fact? Or to hacky, poorly-designed tests, regardless of whether they were written first or last? My hypothesis would be that tests are more likely to be designed well when they are viewed by the developer as an integral part of the development of the program code, rather than as something to be added afterwards -- at least with respect to individual developers.

My objection (to typical Perl/CPAN test suites) is to the prevalent methodology. It is really hard to make a cogent argument on this subject in the abstract.

  • A part of my objection is the effort (and duplication of effort) involved in using the Test::* (TAP) toolset.
  • A part of my objection is to the sprawling, ad-hoc, undesigned nature of the test suites it produces.

It can be typified by the test suite for DBM::Deep. Let me say here that I think dragonchild has done an amazing job with this module, and his test suite is extensive and thorough. What I am going to be critiquing here is the effort that has gone into its construction, and its opacity for those coming along to use it after the fact.

Design

Certainly incomplete, but in essence, DBM::Deep allows you to create Perlish hashes and arrays on disk.

  • Just as with memory-based hashes and arrays (hereafter called HARRAYs), they can be arbitrarily nested.
  • You can create HARRAYs.
  • You can extend HARRAYs.
  • You can iterate HARRAYs.
  • You can destroy HARRAYs.
  • You can add elements to HARRAYs.
  • You can modify elements in HARRAYs.
  • You can delete elements from HARRAYs.
  • In addition to the tied interface, there is an OO interface.
  • Arbitrary combinations of the above features can be wrapped in transaction brackets.
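For anyone who hasn't used it, a minimal sketch of those interfaces (the num_txns parameter enables transactions in recent versions; take the exact details as illustrative rather than gospel):

    use strict;
    use warnings;
    use DBM::Deep;

    # OO interface; transactions need num_txns > 1 at file creation.
    my $db = DBM::Deep->new( file => 'demo.db', num_txns => 2 );
    $db->{config}{retries} = 3;       # arbitrarily nestable, as in ram

    $db->begin_work;                  # transaction bracket...
    $db->{config}{retries} = 99;
    $db->rollback;                    # ...and back to 3

    # Tied interface over the same file.
    tie my %h, 'DBM::Deep', 'demo.db';
    print $h{config}{retries}, "\n";  # prints 3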

Okay, so now let's think about a testing strategy to cover that lot. My initial thoughts (sketched in code a little further down) are:

  1. If I create a HARRAY in ram, as well as the HARRAY on disk, and perform exactly the same manipulations to both, then at any given moment during those manipulations, my pass/fail criteria can be: Does the disk HARRAY match the ram HARRAY?
  2. And by adopting this strategy, I no longer need to hard wire each test so that I know what "output" to expect. That means I can choose my keys and values randomly.
  3. By using randomly generated values, I can pick my ranges and iteration counts:
    • So as to produce some statistically meaningful coverage numbers.
    • To test small and large sized structures.
    • To evaluate worst case performance with pathological datasets--like large numbers of keys that hash to a single bucket.
  4. And for my transaction tests, I can create an equivalent ram-HARRAY and disk-HARRAY, then modify the disk-HARRAY alone inside a transaction that I never commit; the ram and disk HARRAYs should remain equivalent at all times.

More would be required, but this is just a reply to a SOPW reply (to a SOPW reply...).
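To make that concrete, here is a minimal sketch of points 1 and 2 (call it mirror_test.pl; points 3 and 4 would extend it, and every name in it is mine rather than anything DBM::Deep prescribes):

    use strict;
    use warnings;
    use DBM::Deep;
    use Data::Dumper;

    srand( $ENV{TEST_SEED} || 12345 );    # repeatable randomness

    # random lowercase string, 1-8 chars
    sub rand_str { join '', map { chr( 97 + int rand 26 ) } 1 .. 1 + int rand 8 }

    my %ram;                              # the reference HARRAY
    unlink 'mirror.db';
    my $disk = DBM::Deep->new( 'mirror.db' );

    for my $n ( 1 .. 10_000 ) {
        my( $k, $v ) = ( rand_str(), rand_str() );
        if( int rand 2 ) { $ram{ $k } = $v;   $disk->{ $k } = $v;   } # add/modify
        else             { delete $ram{ $k }; delete $disk->{ $k }; } # delete

        next if $n % 100;                 # compare every 100th operation
        local $Data::Dumper::Sortkeys = 1;
        die "divergence at op $n"
            unless Dumper( \%ram ) eq Dumper( { %{ $disk } } );
    }
    print "10000 random ops: disk matched ram throughout\n";

Note that the pass/fail criterion never needs hard-wired expected values: the ram hash is the oracle.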

For repeatability, I seed the PRNG with srand.

For regression testing, I redirect the terminal output to a file and compare against an earlier capture using diff.
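A hypothetical wrapper for that capture-and-diff workflow (the file names and the TEST_SEED variable are mine):

    use strict;
    use warnings;

    $ENV{TEST_SEED} = shift || 12345;     # picked up by srand in the test
    system q{perl mirror_test.pl > current.log 2>&1};

    if( -e 'golden.log' ) {
        my $rc = system 'diff', 'golden.log', 'current.log';
        print $rc == 0 ? "regression: no diffs\n" : "regression: DIFFS FOUND\n";
    }
    else {
        rename 'current.log', 'golden.log';   # first run becomes the baseline
        print "baseline captured\n";
    }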

This strategy allows me to add temporary debug trace without completely screwing up the rest of the testing.

I can drop into the debugger, set a breakpoint, skip over the early tests and walk through the failing test.

At any time I can enable/disable asserts to stop at the point of failure or just log and run on.

At any time I can enable/disable full traceback or just top-level caller traceback.
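Something along these lines is all that takes; the env-var names and behaviours below are mine, purely by way of illustration:

    use strict;
    use warnings;
    use Carp ();

    # ASSERT=off  - asserts become no-ops
    # ASSERT=log  - report the failure and run on
    # ASSERT=die  - stop at the point of failure (the default here)
    # TRACE=full  - full traceback via Carp; otherwise just the caller's line
    sub assert {
        my( $ok, $msg ) = @_;
        my $mode = $ENV{ASSERT} || 'die';
        return if $ok or $mode eq 'off';
        my $report = ( $ENV{TRACE} || '' ) eq 'full'
            ? Carp::longmess( "ASSERT FAILED: $msg" )
            : sprintf "ASSERT FAILED: %s at %s line %d\n", $msg, ( caller )[ 1, 2 ];
        $mode eq 'log' ? warn( $report ) : die( $report );
    }

    assert( 1 == 1, 'sanity' );           # passes silently
    assert( 0,      'forced' );           # dies, warns or traces per env vars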

There have been several replies that say "you can do that too with Test::*/prove/TAP". That's fine (though many of the can-do-that-too's seem, from my encounters, to be very recent additions), but I still question what those tools give me that is extra and useful.

And does that make up for all the things--print, debugger, traceback, remoteness--that they take away? IMO, the only extra they give is a set of statistics that I have no interest in and can see no benefit from.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
