in reply to Re: Inheriting Tests and Other Test Design Issues
in thread Inheriting Tests and Other Test Design Issues
The example you cited of changing functionality (123,456 => 12,3456) is a re-specification of the design or contract and probably shouldn't be done; at the very least, it would require a very clear note in the upgrade documentation.
Yes, but shouldn't you have tests checking that your superclass didn't change its specification? Shouldn't those be very important tests?
It would also logically lead to a maze of dark twisty tests, where every module would start testing that substr counted from 0, not 1, and that * really knew how to multiply two numbers.
No, because your module isn't providing substr or multiplication functionality. It isn't subclassing the functionality of the Perl runtime environment. But if you have a class that subclasses a class that provides number formatting, then your class is providing number formatting. That this happens through inheritance isn't something the users need to know (encapsulation of the implementation).
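A minimal sketch of the situation described here (in Python purely for illustration, since the pattern is language-agnostic; all class and method names are invented):

```python
import unittest

class NumberFormatter:
    """Stand-in superclass: groups digits in threes, 123456 -> '123,456'."""
    def format(self, n: int) -> str:
        return f"{n:,}"

class Report(NumberFormatter):
    """Subclass: adds a label, inherits the formatting behaviour."""
    def line(self, label: str, n: int) -> str:
        return f"{label}: {self.format(n)}"

class TestReport(unittest.TestCase):
    def test_grouping_is_threes(self):
        # To Report's users this is Report's behaviour; a superclass
        # re-specification (say, groups of four) would surface right here.
        self.assertEqual(Report().line("total", 123456), "total: 123,456")
```

Run with `python -m unittest`. Whether such a test belongs in the subclass's own suite is exactly what the rest of the thread argues about.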
It's the same as if you buy a Ford. You'd expect Ford to check that the tires stay on the wheels when going 100 km/h, and not assume that they can delegate that to Goodyear (or whatever brand they use). Goodyear does its tests, but Ford should as well.
Abigail
Re: Re: Inheriting Tests and Other Test Design Issues
by BrowserUk (Patriarch) on Sep 28, 2003 at 01:54 UTC
"Yes, but shouldn't you have tests checking that your superclass didn't change its specification? Shouldn't those be very important tests?"

Yes. You should have a test suite for validating the functionality of your superclass, but not as part of the unit tests of a subclass! If you were producing half a dozen subclasses of a superclass, does it make sense to test the superclass functionality in the unit tests of all of them? What if it's a dozen, or two dozen?

The owners/authors/suppliers of the superclass should maintain and use a functional verification suite for that class. You, as the user/purchaser of the class, should have an acceptance test suite that verifies the functionality. If logic and good relations prevail, these may be the same test suite -- that's neither essential nor always possible -- but this type of testing should only be run when the superclass is upgraded, not when testing a change to a subclass of it.

Tight coupling between unit tests and the unit is essential, but equally essential is that units (modules/classes) be loosely coupled, and that includes their testing. A unit test failure should directly indicate a failure in that unit, not upstream of it. Upstream failures should have been detected before the upstream code was accepted.

"No, because your module isn't providing substr or multiplication functionality. It isn't subclassing the functionality of the Perl runtime environment."

Nor is my module implementing the functionality of the superclass -- it's just using it, just as it uses the Perl runtime, C runtime, and OS system calls -- or passing it through. Unit tests of my unit's functionality should automatically show up any failures in the superclass where they affect that functionality. Any tests aimed solely at verifying the functionality of the superclass should not be necessary, and are undesirable, as all they do is lengthen the development/maintenance cycle and ultimately increase costs.
Duplicated testing doesn't improve anything. If you don't have faith in the testing of the superclass, put the effort into improving it, not duplicating it. Test thoroughly, but only test once! Or rather, in one place. I've got some great (but long) horror stories of corporate testing methods, and how more isn't better if it's more tests of the same thing.

"But if you have a class that's subclassing a class that provides number formatting, then your class is providing number formatting."

If the number formatting is used internally by my module, then my unit testing should show up any disparities between the specified and actual returns I get from it, without resorting to tests specifically aimed at testing the superclass. If my module is passing the superclass functionality through without overriding it, then any testing I did of that would either be testing the inheritance mechanisms -- which would be like testing substr -- or it would be duplicating testing that is (or should be) already performed by the superclass's unit tests or by my acceptance testing of the superclass.

"That that happens through inheritance isn't something the users need to know (encapsulating of the implementation)."

Agreed, but that doesn't enter into the argument. The users of my module (and by proxy the superclass) should have their own functional verification or acceptance tests for my module. If we can agree to share those between us, well and good; but if the superclass has a bug or failure to meet spec that isn't detected by my unit testing or by my users' acceptance testing of my module, then either it is irrelevant or those tests are flawed and should be improved. It's not a case that anything should go untested; it's just that there is no benefit in testing stuff twice.
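The "test once, in one place" idea can be sketched as follows (Python for illustration; all names are invented). The subclass's suite exercises only what the subclass adds, yet an upstream specification change would still surface through it:

```python
import unittest

class Formatter:
    """Upstream class, assumed to ship with its own test suite."""
    def format(self, n: int) -> str:
        return f"{n:,}"

class PaddedFormatter(Formatter):
    """Unit under test: only the right-justified padding is new behaviour."""
    def padded(self, n: int, width: int = 12) -> str:
        return self.format(n).rjust(width)

class TestPaddedFormatter(unittest.TestCase):
    def test_padding(self):
        # Exercises only what this unit adds. The digit grouping is used,
        # not re-specified, so an upstream failure still shows up here
        # without any test aimed solely at the superclass.
        self.assertEqual(PaddedFormatter().padded(1234, 8), "   1,234")
```

Run with `python -m unittest`. If the grouping rule changed upstream, this test would fail too, but the duplicated-coverage question is whether that failure should be caught here or in the superclass's own suite.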
Placing tests in the right place not only minimises the amount of testing done and the costs involved in doing it, it also means that testing times are shorter, which encourages the tests to be run more frequently at each particular level, which improves overall throughput and quality.

"It's the same if you buy a Ford. You'd expect that Ford checks that the tires stay on the wheels when going 100km/h, and don't assume that they delegate that to Goodyear (or whatever brand they use). Goodyear does its test, but Ford should do as well."

Nice (or perhaps, pertinent) analogy :)

Goodyear should test the construction of their tyres[1] and ensure they live up to their specified rating (SR/VR/HR etc.)[2]. The wheel manufacturer (Ford or a 3rd party) should ensure that their wheels correctly retain standard tyres -- not just the particular tyre chosen as standard equipment on one particular model that uses that wheel -- on the rim under all 'normal' circumstances[3]. Ford should ...

During this latter testing, it should not be necessary for Ford to perform lamination tests on the plys, or durability tests on the radial reinforcing, or test the compound for longevity or wet weather grip etc. This should all have been covered by the unit testing and be certified by the manufacturer's rating.

In software terms, the tests described above fit roughly into these categories:

[1] Unit testing.

Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -- David Dunham
"When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." -- Richard Buckminster Fuller
If I understand your problem, I can solve it! Of course, the same can be said for you.
by blssu (Pilgrim) on Sep 29, 2003 at 19:46 UTC
I think the original question confuses tests with contracts. A 3-digit format requirement of the user -- it doesn't matter if it's a subclass or not -- must be specified in a contract (types, guards, pre-conditions, etc.). If the requirement is violated by switching to a 4-digit format, this is not a test failure but a contract failure. If the test suite were copied into the subclass, the subclass would incorrectly report a test failure, and someone might "fix" the superclass format instead of fixing the interface mismatch.

Ideally, I'd want the subclass to inherit all the superclass tests. That way I can run the superclass tests on an instance of the subclass. If there are any failures, that might mean the inheritance is incorrect. (Is a circle an ellipse?) It might also mean the superclass tests are not polymorphic. Either way there's a bug to fix. The nasty dilemma is that a system can be correct and unsafe at the same time.

On the subject of tire testing...

Vehicle manufacturer tire testing is actually a very poor analogy. Those tests are more like security tests -- if all of your design assumptions are violated, does the system fail gracefully? Vehicle crash testing is an extreme example. The software analogy would be to introduce standardized hardware failures and then verify that the failures do not cause data loss.

Specification tests (your levels 1 through 5) are really just purchasing formalities. Did we get what we paid for? Programmers worry almost exclusively about this. Is my superclass ripping me off?

The most common tests that manufacturers run are design verification and performance measurement. Unlike most software, mechanical objects are quite unpredictable. Manufacturers run tests to see if what they think should happen really does happen. Programmers never do this kind of testing. Gee, I wonder if 1+1 still works if I put it before a while statement?

Lastly, manufacturing a verified design is not easy either -- tests are required there too.
These are similar to specification tests and mostly interesting to accountants. 50% failure of a dirt cheap process might be better than 1% failure of an expensive process.
by BrowserUk (Patriarch) on Sep 29, 2003 at 21:24 UTC
"Ideally, I'd want the subclass to inherit all the superclass tests."

This is a source of confusion, in my mind at least, though dws touched on it earlier too. There are two "chains of inheritance", for want of a better term, involved in this discussion: the superclass & subclass chain, and the superclass tests and the possibility of the tests for the subclass 'inheriting' those within the superclass.

My view, based on my accumulated wisdom -- I use the term loosely -- from exposure to the various different methods I've used, is that using inheritance in the test chain is bad practice, regardless of whether this is formal, language-based inheritance, or cruder mechanisms by which the tests for the superclass would be run as part of the testing cycle of the subclass. My reasoning is as I cited above.

On the tyres thing: most analogies don't stand up to deep scrutiny. I'll hold my hand up here and say that I got a long way into arguing with your crash-test scenario -- or rather the purpose of crash testing vehicles -- before throwing it away and "moving on" :) Abigail used it to make the point that sub-systems aren't islands and there are interactions between them. I tried to make the point that he was correct, but that different testing is required at different stages. By their very nature, analogies tend to be over-simplified, and discussing the true nature of the system used in the analogy is fruitless. If the analogy helps in making the point being broached, it has served its purpose. If it didn't, move on.

Examine what is said, not who speaks.
by blssu (Pilgrim) on Sep 30, 2003 at 13:58 UTC
by BrowserUk (Patriarch) on Sep 30, 2003 at 14:08 UTC