in reply to How does one avoid tautologies in testing?

(At the risk of telling you what you already know.)

I think articles on the topic of “Tautology Testing” can be found in the Software Engineering literature. I have even encountered approaches based upon it, e.g. Tautology Based Development (TBD) or Tautology Test Driven Development (TTDD)?!

A tautology test asserts that the code does what the code does?! I have my doubts. I have difficulty appreciating tautology tests except for maybe some really specialized cases. For normal business applications it sounds to me like overtesting and therefore a waste of money, money that might be spent on other SQA activities, such as static testing, to gain confidence in the code. How to avoid it? Maybe Injection Testing? See here for some info. My personal approach is to try to prevent it, and when I do see it, to throw it out.
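
For what it's worth, a minimal sketch of what I mean, with a hypothetical serialize_record() (the example is illustrative, not taken from any real project):

    use strict;
    use warnings;
    use Test::More tests => 2;

    # Code under test (hypothetical): join fields with a pipe character.
    sub serialize_record {
        my @fields = @_;
        return join '|', @fields;
    }

    # Tautological: the "expected" value is produced by the very code under
    # test, so this can only fail if serialize_record() is non-deterministic.
    is( serialize_record('a', 'b'), serialize_record('a', 'b'),
        'code does what the code does' );

    # Non-tautological: the expected value is written out independently of
    # the implementation.
    is( serialize_record('a', 'b'), 'a|b', 'fields are pipe-separated' );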

Although I do test the SW I write before I throw it over the proverbial wall (normally unit tests and an integration test, preferably in a representative test environment), I prefer/demand that other people test it as well. It won't be the first time that I keep reading over my own mistake and simply fail to see it. IMO software testing should provide an objective and therefore independent view of the quality of the SW; the more other people test your code, the better.

Testing is an engineering discipline in its own right. Years ago I hired an independent test consultant from a company specialized in quality. The model used was V2M2. This was a real eye-opener for me, and I think it's safe to say the project benefited a lot from it. I especially liked the (good) test coverage and the traceability to the requirements. Whenever possible I follow this approach, i.e. outsource the testing as much as possible.

Cheers
Harry


Re^2: How does one avoid tautologies in testing?
by ELISHEVA (Prior) on Jul 16, 2009 at 12:37 UTC

    Yeah - I saw that TBD stuff. But others haven't, and the purpose of posting a node on Perl Monks is to create a discussion from which we can all learn, not just the OP.

    Under ideal circumstances I prefer to have different people do writing and testing - it also helps a lot in identifying documentation errors and fuzzy specs. Often the person who does the coding is so close to the problem that they are unaware of their implicit assumptions. But small teams don't always have that luxury. Given that much important software innovation comes from under-resourced start-ups, skunk teams within corporations, and open-source projects, I think it is important to develop testing philosophies that work for teams both large and small.

    As for tautology testing I have mixed feelings. As gwadej pointed out (and LanX echoed) there are other reasons for testing (regression, crash testing) and they are very important. If your software is going to have a life cycle with new features and patches then regression testing is reason enough to pay the cost of test development.

    As I ponder the discussion so far, I'm beginning to realize that many things that seem like tautologies are not actually tautologies. It all depends on what you are using the test for and why. As long as we are very clear on what the test can and cannot verify, the test may still be valuable.

    For example, as moritz discusses, testing that "true is, well, true" is still a valuable test if you are testing a compiler because there are an unbelievable number of ways to screw up a compiler. The test seems like a tautology only when we discount the amount of processing involved for a compiler to decide that True is true. gwadej makes a similar point when he discusses debugging things that "could not possibly be wrong".
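
    To make that concrete, here is a rough sketch of the kind of near-trivial assertions a language test suite might contain; the specific assertions are illustrative, not taken from any actual compiler test suite:

        use strict;
        use warnings;
        use Test::More tests => 3;

        # Near-trivial assertions like these still exercise the tokenizer,
        # the parser, and the boolean-context handling of the implementation
        # running them.
        ok( 1,          'a literal 1 is true' );
        ok( !0,         'negating 0 gives a true value' );
        ok( 'a' lt 'b', 'string comparison yields a true value' );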

    Or take an even more extreme example, also given by moritz: testing sub s. If the purpose of the test is to verify that something is correctly split then using split to test a sub that calls split is a very bad idea. However, the purpose of such a test may be something very different.

    Suppose you have API documentation that says that parameters should be delivered in a particular order and you want to verify that sub s indeed expects those parameters. A true Ttest (to borrow gwadej's term) would pass even when the code is wrong. A test that passes parameters in the documented order, however, will not pass if that order is wrong! Hence, for purposes of parameter-order testing, even moritz's sub s/split example isn't a tautology.
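
    A rough sketch of that distinction, using a hypothetical split_fields() in place of moritz's sub s (whose exact code isn't given in the thread):

        use strict;
        use warnings;
        use Test::More tests => 2;

        # Hypothetical wrapper standing in for moritz's "sub s"; documented
        # calling convention: split_fields($string, $separator).
        sub split_fields {
            my ($string, $separator) = @_;
            return split /\Q$separator\E/, $string;
        }

        # Tautological as a correctness check: the expected list is built
        # with the same split operation the sub wraps.
        is_deeply( [ split_fields('a,b,c', ',') ],
                   [ split /,/, 'a,b,c' ],
                   'agrees with split itself' );

        # Not a tautology: this fails if the sub quietly expects
        # ($separator, $string) instead, so it pins down the documented
        # parameter order.
        is_deeply( [ split_fields('a,b,c', ',') ],
                   [ 'a', 'b', 'c' ],
                   'documented order: string first, then separator' );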

    In conclusion, I think at least three questions need to be answered to decide if Ttesting is overtesting:

    • Is the test valuable for reasons other than conformance (e.g. regression)?
    • Is the test a true tautology, and if not, what exactly is it testing that is not tautological?
    • Is the thing that is actually being tested important enough to the success of the system to justify the cost? If you are publishing an API, getting parameter order right is rather important. If all your users are going to check the code anyway before they use it, then maybe it is just a "nice to do".

    As I think about what has been said so far in this thread, I am honestly surprised at how many non-technical factors and trade-offs seem to be creeping into the process of deciding what and how to test. Your post does a good job of stressing that point. So far we have:

    • Non-use of tests / accuracy of tests
    • Intended life cycle of the software
    • Business value of the non-tautological portion of the test

    I wonder what others will appear.

    Best, beth

      As I think about what has been said so far in this thread, I am honestly surprised at how many non-technical factors and trade-offs seem to be creeping into the process of deciding what and how to test.

      Aren't most of the interesting (annoying, frustrating, etc.) problems in programming the non-technical ones?

      Novice programmers can believe that the technical challenges are the only ones we consider. This is why they often apply technical solutions to business problems and are surprised when they don't work.

      As we gain more experience, it's necessary to question why we do what we do. This question makes us all think about why we test what we test and what benefits we derive from those tests.

      G. Wade