[X] Other: I've never been one for buzzword compliance
Before anyone gets too upset about that, I'll explain that so far, "Traits" are nothing more than a buzzword to me.
Unlike a lot of the other responders so far, I have read the traits paper--several times, in fact--and I am convinced that the basic issue it is trying to address needs tackling.
I also think that "roles", "mix-ins", "behaviours", the "decorator pattern", "type classes" and several other similar concepts--the name varying with the language they are associated with and described in terms of--are all attempts to address the same problem.
The problem being addressed is that single inheritance forces copy&paste code re-use upon the programmer because you cannot (easily) abstract common behaviour from two classes and place it in a separate module from which both can inherit that common code.
Yes, you can do this once; but if three classes share common behaviours, and not all of those behaviours are common to all three classes, then you have a partitioning problem. If you lump all the common behaviours into a single module and have all three classes inherit it, then some classes will acquire behaviours they should not have. If you segregate the behaviours into non-overlapping modules, then under single inheritance no class can inherit the two (or more) modules it needs. And if the inheriting classes are themselves derived from other classes, you're completely stymied.
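The partitioning problem can be sketched concretely. Here's a minimal Python illustration (the class and method names are hypothetical, invented for the example): three classes need three behaviours in overlapping combinations, and single inheritance cannot partition them cleanly.

```python
# Three behaviours shared in overlapping combinations:
#   Duck  needs walk + swim + fly
#   Dog   needs walk + swim (but not fly)
#   Plane needs fly only
#
# Option 1: lump everything into one base class -- now Plane
# wrongly acquires walk/swim, and Dog wrongly acquires fly.

class Everything:
    def walk(self): return "walking"
    def swim(self): return "swimming"
    def fly(self):  return "flying"

class Dog(Everything): pass     # Dog should not fly...
class Plane(Everything): pass   # ...and Plane should not walk or swim

# Option 2: segregate into non-overlapping modules -- but under
# single inheritance Duck can name only ONE of them as its parent.

class Walker:
    def walk(self): return "walking"

class Swimmer:
    def swim(self): return "swimming"

class DuckSI(Walker):           # pick one parent...
    pass                        # ...and Duck has lost swim()

assert hasattr(Dog, "fly")          # behaviour wrongly acquired
assert not hasattr(DuckSI, "swim")  # behaviour wrongly lost
```

Either way you partition, some class ends up with too much or too little.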
You can achieve the requirement using multiple inheritance, but this makes for a zillion tiny classes that get inherited at many different levels throughout the inheritance hierarchy. This has a number of bad side effects:
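The MI alternative just described looks like this in Python (again with hypothetical names): each behaviour becomes its own tiny class, and every concrete class composes exactly the set it needs.

```python
# Multiple inheritance solves the partitioning, at the cost of a
# proliferation of tiny single-behaviour classes scattered through
# the hierarchy.

class Walker:
    def walk(self): return "walking"

class Swimmer:
    def swim(self): return "swimming"

class Flyer:
    def fly(self): return "flying"

class Duck(Walker, Swimmer, Flyer): pass
class Dog(Walker, Swimmer): pass
class Plane(Flyer): pass

d = Duck()
assert d.walk() == "walking"
assert d.swim() == "swimming"
assert d.fly() == "flying"
assert not hasattr(Plane, "walk")   # each class gets exactly what it needs
```

Three behaviours already mean up to three extra classes; a real hierarchy multiplies this quickly, which is where the documentation and search-order problems below come from.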
E.g. documentation gets deeply nested and cross-linked, making it extremely difficult both to follow and to produce. Trying to work out exactly which interfaces any given class will respond to becomes a nightmare.
Should the inheritance tree be searched breadth-first, or depth-first, or depth-within-breadth, or breadth-within-depth, or depth-within-breadth-within-depth, etc.?
In addition, there is the problem that if the search pattern is wholly pre-determined, then whatever that pattern is will never be right for all situations, with the result that the class writer will end up trying to circumvent the default search pattern to achieve his goals. When that starts to happen, everyone--from the class writer, to the class user, to the maintainer, to the documenter--has problems.
Alternatively, the language/API designer can punt the search-pattern decisions off to the class writer by providing APIs/keywords/options that allow each class to specify the search order from its position within the class hierarchy on up (down?). This is the most flexible option of all, but also the one guaranteed to cause the most headaches for most people. No-one will be able to predict--or document--the search pattern beyond any one class, or any one level, because the next level up (down/through) may change it. Unpredictable, and therefore impossible to document.
And even when you have sorted out the desired search pattern and correctly implemented it--whether in the compiler, the interpreter, or the classes themselves--there remains the problem that what the code does correctly time after time, the programmer will often have extreme difficulty understanding, never mind remembering.
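Python is a concrete case of a language that settled this question by fiat: it uses a fixed linearisation algorithm (C3). Even so, the point about programmer comprehension stands--the chosen order can differ from what a naive depth-first intuition predicts. A small sketch:

```python
# The classic "diamond": D inherits from B and C, both of which
# inherit from A.  Where should a lookup on D search first?

class A:
    def who(self): return "A"

class B(A):
    pass                        # B inherits who() from A

class C(A):
    def who(self): return "C"   # C overrides who()

class D(B, C):
    pass

# Naive depth-first search would follow D -> B -> A and find A.who
# before ever reaching C.  Python's C3 linearisation instead places
# C *before* A in the search order:
assert [k.__name__ for k in D.__mro__] == ["D", "B", "C", "A", "object"]
assert D().who() == "C"         # not "A"
```

One fixed, documented order--but the programmer still has to know the algorithm to predict the result, which is exactly the comprehension burden described above.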
The costs of maintaining the data required for runtime introspection of a wide and deep MI tree are daunting enough.
If runtime modification of the tree is allowed by the language, then the costs of allowing dynamic modification of vtables--and their wholesale duplication when new classes are derived from existing ones at runtime--create the need for a vast runtime database of hierarchical introspection data.
If additions to, or modifications of, the vtables are allowed on an instance-by-instance basis, these costs get even larger.
Even with method caching at multiple levels, each modification will result in cascading cache invalidation.
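To make the invalidation cost concrete, here is a deliberately blunt sketch (my own toy illustration, not any particular VM's implementation): a method cache guarded by a global generation counter, where *any* class modification bumps the counter and thereby invalidates *every* cached lookup at once.

```python
# Toy method cache with global-generation invalidation.
generation = 0              # bumped on every class modification
cache = {}                  # (class, name) -> (generation, method)

def cached_lookup(cls, name):
    """Return cls.name, consulting the cache when it is still valid."""
    key = (cls, name)
    hit = cache.get(key)
    if hit is not None and hit[0] == generation:
        return hit[1]                   # fast path: cache hit
    meth = getattr(cls, name)           # slow path: full hierarchy walk
    cache[key] = (generation, meth)
    return meth

def modify_class(cls, name, func):
    """Install a method and invalidate the entire cache."""
    global generation
    setattr(cls, name, func)
    generation += 1                     # every cached entry is now stale

class Greeter:
    def hello(self): return "hi"

g = Greeter()
assert cached_lookup(Greeter, "hello")(g) == "hi"
modify_class(Greeter, "hello", lambda self: "hello!")
assert cached_lookup(Greeter, "hello")(g) == "hello!"   # stale entry skipped
```

Real VMs use finer-grained schemes (per-class version tags, for instance) precisely to avoid this wholesale flush--but the finer the grain, the more bookkeeping data must be maintained, which is the cost the paragraph above is pointing at.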
So far, the various mechanisms I've read about for tackling this underlying problem of OO hierarchies seem to concentrate on how their details differ from those of the other proposed solutions.
What I have yet to see is any research into how to deal with the runtime penalties--memory usage and performance--that are common to all of the proposals.
The complex hierarchies that result from MI can be reduced at compile time to highly efficient--in space and time--vtable structures that effectively eliminate the runtime overhead completely. You trade slow compile times for fast runtimes. The programmer may have problems getting his brain around the complexity, but the compiler doesn't.
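The flattening the compiler performs can be sketched in a few lines (a hypothetical helper, not a real compiler pass): walk the whole hierarchy once, up front, and freeze every resolved method into a flat per-class table, so dispatch at call time is a single table probe with no hierarchy search at all.

```python
def flatten_vtable(cls):
    """Collapse cls's entire inheritance chain into one flat
    name -> function table, resolved once, up front."""
    vtable = {}
    for base in reversed(cls.__mro__):      # base-first, so the
        for name, attr in vars(base).items():   # most-derived wins
            if callable(attr):
                vtable[name] = attr
    return vtable

class Animal:
    def name(self): return "animal"
    def legs(self): return 4

class Bird(Animal):
    def name(self): return "bird"           # override
    def legs(self): return 2

# "Compile time": build every table once.
VTABLES = {cls: flatten_vtable(cls) for cls in (Animal, Bird)}

# "Runtime": dispatch is one dict probe, no hierarchy walk.
b = Bird()
assert VTABLES[Bird]["name"](b) == "bird"
assert VTABLES[Bird]["legs"](b) == 2
```

In a static language this table is what actually gets emitted; the point of the following paragraph is that once classes can change at runtime, these frozen tables must be rebuilt or patched, and that is where the costs come back.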
Once you move MI--or any of the proposed solutions for reducing or eliminating it--into a dynamic language, the need to support introspection and dynamic updates, to both the hierarchy itself and the vtables that encapsulate it, creates a huge problem for implementing efficient data structures and algorithms to support them.
From what looking around the web I've done, there seems to be little or no information on ways of implementing this efficiently, nor even any visible ongoing research into it.
In 10 years' time, when we are all using 64-bit processors with effectively unlimited memory (barring cost) and processor performance has doubled four more times, the need for efficient implementation will have disappeared; classes will be downloaded from the web, or will run as services on the web; the JofACM will have carried many papers on the problem of efficient implementation; and there will be half a dozen or more live implementations upon which to base judgements of the merits and downsides.
Until then, arguing about the details--whether "mix-ins" are the same as "traits", or subtly different--all seems way premature.
So to me, "traits", along with all the rest, are rather ill-defined buzzwords all groping toward a solution to a problem that does need solving. However, as yet, they are presenting opinions about the requirements of the solution, not actual solutions that I can download and use. When the various sets of opinions start to coalesce into one or two well-defined implementations, that will be the time to go looking for the one that most closely matches my own thoughts.
Or maybe I should just implement my own ideas--I'll call them "Ingredients". That's catchy. My catch phrase will be "Bake your own classes quickly and easily with pre-prepared Ingredients!" :)
Your question is a little like asking why I haven't yet bought a Blu-Ray drive. Whilst the two sides in the Blu-Ray -v- HD DVD war continue to argue with each other (and amongst themselves) over the details of the next generation of DVD formats, I've only just got around to getting a writable DVD drive for my computer.
If I had bought into (and bought) every new type of DVD reader, writer, or re-writer that has come to the marketplace over the last 5 or so years, I would now have a dozen or more, and would have paid premium prices for each as they became available--to the tune of several thousand pounds, I would guess.
As it is, I just bought one drive for £56 that reads, writes and re-writes 3 different forms of DVD and 2 (or more) types of CD. Worth waiting for.
In reply to Re: Informal Poll: why aren't you using traits?
by BrowserUk
in thread Informal Poll: why aren't you using traits?
by Ovid