in reply to Test output: Interpret or pass/fail

The other camp believes that it makes sense for the output of some tests to require a bit of eyeballing, that tests should be run less frequently, and that the effort of making everything a boolean pass/fail isn't worth it.

When you make all of your tests pass/fail, it's easy to write a script that summarizes the results. Very few people are going to scan several hundred (or thousand) lines of test output anyway. I don't. I just want to know what (if anything) failed. xUnit's "green bar" is a nifty binary summarizer. You either get the green bar (yeah!) or you don't (boo!). Trying to summarize ad hoc tests gets messy. You're essentially trying to turn them back into pass/fail tests, so why not do that in the first place?
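As a minimal sketch of what such a summarizer might look like, here's one that counts TAP-style "ok" / "not ok" lines (the format Perl's test harness emits) and reports only the failures. The function name and sample output are hypothetical:

```python
import re

def summarize(lines):
    """Count passes and failures in TAP-style output ('ok N' / 'not ok N')."""
    passed = failed = 0
    failures = []
    for line in lines:
        if re.match(r"not ok\b", line):
            failed += 1
            failures.append(line.strip())
        elif re.match(r"ok\b", line):
            passed += 1
    return passed, failed, failures

# Hypothetical test output to summarize:
output = ["ok 1 - parses config", "not ok 2 - writes report", "ok 3 - cleans up"]
passed, failed, failures = summarize(output)
print(f"{passed} passed, {failed} failed")
for line in failures:
    print(line)
```

The point is that once every test boils down to pass/fail, the summary is a few lines of code; with ad hoc output you'd be writing a custom parser per test.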

Even if you have a relatively small amount of test output, ad hoc results make you work for it, which runs counter to the virtue of Laziness. Let the computer do that.

As for running tests less frequently... that depends on what "less frequently" means. I run tests several times per hour during normal development, though I usually factor tests so that slow ones (e.g., ones that go against a database) are in a separate script that gets run less often. I find that 20-30 seconds is about the point where I start to get impatient and hesitate to run the tests.
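One way to factor tests like this, sketched with Python's unittest (the suite names and the `--slow` flag are hypothetical; the sleep stands in for a real database call):

```python
import sys
import time
import unittest

class FastTests(unittest.TestCase):
    # Cheap tests: run these on every change.
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

class SlowTests(unittest.TestCase):
    # Expensive tests (e.g., database round-trips): run these less often.
    def test_pretend_database_roundtrip(self):
        time.sleep(0.1)  # stand-in for a real database call
        self.assertTrue(True)

def run(suite_class):
    """Run one TestCase class; return True if everything passed."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(suite_class)
    return unittest.TextTestRunner(verbosity=0).run(suite).wasSuccessful()

if __name__ == "__main__":
    # Default to the fast suite; pass --slow to run the expensive one.
    cls = SlowTests if "--slow" in sys.argv else FastTests
    sys.exit(0 if run(cls) else 1)
```

Keeping the fast suite under that 20-30 second threshold is what makes it painless to run several times an hour.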
