in reply to Re: Re: Re: Re: inheritance and object creation
in thread inheritance and object creation

Sure, you can always solve a problem with single inheritance, but sometimes the result is an even more complex and unmaintainable mess than the worst multiple-inheritance mess. I have been reading the work on Traits a lot lately (here is a link to most of the relevant papers). I would suggest reading the one on the refactoring of the Smalltalk class hierarchy; it is particularly relevant.

But understand that I agree with you too; multiple inheritance, mix-ins, etc. are not to be used lightly. In certain cases, though, base-level frameworks being one, things like multiple inheritance can be extremely valuable. Just compare the Java standard library to the Eiffel standard library. Java goes out of its way a lot to accomplish things that Eiffel handles with a single use of multiple inheritance.

For example, in Java, Comparable is an interface with no implementation. There is not much re-use there, just a single method that can be relied upon to be present at runtime. The identically named Eiffel class COMPARABLE is far more than that, thanks to Eiffel's multiple inheritance and deferred classes (which are not unlike Traits/Roles in a way). You get all the standard comparison operators and a few other methods on top of that (see the link for details). All you need to do is define an implementation of the "<" operator and the rest comes for free. You just can't do anything like that in a single-inheritance world, even with Java's "solution" to multiple-inheritance needs (a.k.a. interfaces).
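
To give a rough feel for that "define one method, get the rest for free" shape in Perl terms, here is a minimal sketch using an ordinary mix-in parent class; the Comparable, Money, compare, and is_* names are all invented for illustration and aren't meant to mirror the real Eiffel class:

    package Comparable;
    use strict;
    use warnings;

    # Hypothetical mix-in: a subclass supplies compare($other), returning
    # -1, 0, or 1, and the rest of the comparison methods come along free.
    sub compare         { die ref(shift) . " must implement compare()" }
    sub is_less_than    { $_[0]->compare($_[1]) <  0 }
    sub is_greater_than { $_[0]->compare($_[1]) >  0 }
    sub is_equal_to     { $_[0]->compare($_[1]) == 0 }
    sub is_between {
        my ($self, $low, $high) = @_;
        return !$self->is_less_than($low) && !$self->is_greater_than($high);
    }

    package Money;
    our @ISA = ('Comparable');    # mix the behaviour in

    sub new     { bless { amount => $_[1] }, $_[0] }
    sub compare { $_[0]->{amount} <=> $_[1]->{amount} }

    package main;
    my ($five, $ten) = (Money->new(5), Money->new(10));
    print "5 < 10\n"   if $five->is_less_than($ten);
    print "in range\n" if $five->is_between($five, $ten);

Money writes exactly one method and inherits a small comparison protocol; Eiffel's COMPARABLE does the same sort of thing with real operators and contracts, and multiple inheritance means a class can pull COMPARABLE in alongside its "real" parent.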

The Traits/Smalltalk collection refactoring paper details a savings of about 10% in the number of methods needed to implement the same functionality, just by using traits. They also managed to move about 8% of the methods that were "too high" in the hierarchy (a common problem in large single-inheritance class hierarchies). Sure, those percentages don't sound like much, but we are talking about removing 68 methods from a set of 635, and moving 55 methods "down" to their proper level in the hierarchy, 15 of which had previously been explicitly disabled (they threw a "shouldNotImplement" exception). This is also based on a preliminary refactoring rather than an exhaustive one, so more savings may have been possible. The author claims that the hierarchy is simpler too (which I only partially agree with) and more conceptually sound (which I totally agree with).

Just because things like multiple inheritance, mix-ins, traits, roles, etc. aren't useful in everyday programming doesn't mean they should be discounted entirely. They have their place, and sometimes you may find they actually reduce complexity rather than increase it. Not everything should or can be shoehorned into a single-inheritance world, so it's nice to have the other options available, IMO.

-stvn

Re: Re: Re: Re: Re: Re: inheritance and object creation
by tilly (Archbishop) on Feb 25, 2004 at 23:20 UTC
    Addressing your example, in Ruby, the mixin Comparable is equivalent to the Eiffel one. You define <=> and the rest comes for free. I do not discount the power of doing that. Nor am I discounting the fact that with traits you could reduce the amount of code and clean other code up.

    However, I am saying that this power comes at a constant development cost in terms of figuring out what is happening, why, where, and when. This is not something that is visible in the statistics that you quote. This is something that shows up gradually when someone is lost in code and wandering around saying, "Where did foobar() get defined again, and how do I get at it?" Or, alternately, when you are wandering around saying, "I have a foobar(), but it isn't what I expected it to be. Why not? And where is this coming from?"

    Also if you go up to the link I had on mixins, you'll find that my opinion is somewhat finely nuanced. I don't think that mixins (or traits, or roles, or...) are Evil Incarnate. I just think that they impose a cost for their benefit, and the cost is one that needs to be carefully understood before deciding to splurge on their usage.

    Programmers who know when to use them and (more importantly) when not to use them will find them nice and not very problematic. Programmers who don't know that (and most of those who clamour the loudest do not) just get lots more rope. I guess that fits with Perl's design philosophy (give 'em enough rope to hang themselves), but that doesn't mean I shouldn't offer advice on how to avoid self-hanging.

      However, I am saying that this power comes at a constant development cost in terms of figuring out what is happening, why, where, and when. This is not something that is visible in the statistics that you quote. This is something that shows up gradually when someone is lost in code and wandering around saying, "Where did foobar() get defined again, and how do I get at it?" Or, alternately, when you are wandering around saying, "I have a foobar(), but it isn't what I expected it to be. Why not? And where is this coming from?"

      These are issues that arise in the single-inheritance world as well, although because that model is better known and understood, it is less of a problem there. The traits paper stresses that development tools are a very helpful part of the traits refactoring process, and good tooling is an obvious need for any new model if it is going to be adopted. Things like meta-information left over from the compiler are important too, as are tools to view that information. A great example of this is the .NET CLR, which makes excellent use of compiler meta-information and allows access to it at runtime, making highly detailed reflection simple and efficient (at least more efficient than Java's reflection). There was a time when people didn't think there was a need for run-time type information in C++, but few programmers today would want to live without it.
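
      Perl already carries a bit of this meta-information around, which is exactly the kind of thing that takes the sting out of the "where did foobar() come from?" problem. As a small, hypothetical sketch (the package names are invented for illustration), the core B module can tell you which package actually supplies an inherited or mixed-in method:

          #!/usr/bin/perl
          use strict;
          use warnings;
          use B ();    # core module exposing the interpreter's meta-information

          package Serializable;
          sub as_string { "..." }

          package Loggable;
          sub log_message { print "log: $_[1]\n" }

          package Widget;
          our @ISA = ('Serializable', 'Loggable');    # two mixed-in parents

          package main;

          # Given only a method name, ask which package really defines it.
          my $code = Widget->can('as_string') or die "no such method";
          my $cv   = B::svref_2object($code);
          printf "as_string() comes from package %s\n", $cv->GV->STASH->NAME;
          # prints: as_string() comes from package Serializable

      That is a crude, manual version of what real tool support could do for you automatically.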

      Also if you go up to the link I had on mixins, you'll find that my opinion is somewhat finely nuanced. I don't think that mixins (or traits, or roles, or...) are Evil Incarnate. I just think that they impose a cost for their benefit, and the cost is one that needs to be carefully understood before deciding to splurge on their usage.

      I didn't mean to imply that you did think they were evil incarnate. I think we both agree that they have their place, and that they should only be used by those who know when, and more importantly when not, to use them. But those lessons are sometimes very hard to learn and live with, so my feeling has always been that spreading the knowledge of when to use them (and when not to) can only help the programming community at large.

      -stvn