in reply to Re: automated testing
in thread automated testing

When a program always generates a report (including smooth, uneventful execution) you run the risk of missing errors simply by glossing over the report. I know it's stupid, but it's human. Programs should report only on failure, if it can at all be helped. If you define failure as including the case of not running at all, there needs to be some other mechanism in place for checking whether or not the program ran, and for generating a notification if it didn't.
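That two-part idea — silence on success, plus a separate check that the job ran at all — can be sketched roughly as below. The heartbeat path, the commands, and the notify hook are illustrative placeholders, not anything from this thread:

```python
import subprocess
import time
from pathlib import Path

# Hypothetical heartbeat location; any writable path works.
HEARTBEAT = Path("/tmp/nightly.heartbeat")

def run_and_report(cmd, notify):
    """Run cmd; call notify() only on failure; always leave a heartbeat."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    HEARTBEAT.touch()  # proves the program ran at all, pass or fail
    if result.returncode != 0:
        notify(f"FAIL (exit {result.returncode}): {result.stderr.strip()}")
    # On success: say nothing.

def watchdog(max_age_seconds, notify):
    """The 'other mechanism': alarm if the job never ran or ran too long ago."""
    if not HEARTBEAT.exists():
        notify("FAIL: job never ran")
    elif time.time() - HEARTBEAT.stat().st_mtime > max_age_seconds:
        notify("FAIL: heartbeat is stale")
```

The watchdog would typically be scheduled independently (e.g. from cron), so a dead job and a dead scheduler are caught by different mechanisms.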

Re^3: automated testing
by Anonymous Monk on Oct 29, 2010 at 17:04 UTC
Re^3: automated testing
by JavaFan (Canon) on Oct 29, 2010 at 16:47 UTC
    When a program always generates a report (including smooth, uneventful execution) you run the risk of missing errors simply by glossing over the report.
    That's why you put the important fact (pass/fail) in the Subject or the first line of the report.

    Having to gloss over a report to find out whether it's about a failure or a success is wrong.

    Of course, if you get a gazillion reports a day, you can automate that as well: just have an additional program report to you which runs were failures and which were passes. Just make sure *that* report gets sent every day.
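    A minimal sketch of such an aggregator, assuming (as suggested above) that each report's subject line leads with PASS or FAIL — the subject format is an assumption for illustration, not anything prescribed here:

    ```python
    def summarize(subjects):
        """From a pile of report subject lines, keep only the failures.
        Assumes each subject starts with PASS or FAIL (case-insensitive)."""
        return [s for s in subjects if s.upper().startswith("FAIL")]

    subjects = [
        "PASS: nightly backup",
        "FAIL: log rotation (exit 2)",
        "PASS: index rebuild",
    ]
    # The daily summary is sent unconditionally, so its absence is
    # itself a signal that something upstream broke.
    print("\n".join(summarize(subjects)) or "all passed")
    ```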

      But...you don't care what passed; getting a gazillion reports per day is itself wrong. Handling passed cases (e.g., parsing the subject line for PASS or FAIL) is just additional overhead. I'm not saying don't LOG; I'm just saying don't NOTIFY on success.
      argh I forgot to log in. The reply from anonymous monk is from me, Urthas. Apologies.