in reply to Re: Informal Poll: why aren't you using traits?
in thread Informal Poll: why aren't you using traits?
The way this post reads to me is that MI is expensive in dynamic languages, and traits are one of many attempts to address that – that if MI were cheap, we wouldn’t need traits.
But what about the complexity issue? Programming has always evolved in the direction of greater abstraction; the complexity of software systems we build today is orders of magnitude greater than that of the artifacts created by any other engineering discipline. (And unlike other disciplines, you can do repetitive work once only, and then factor it away – so the complexity keeps growing. This is why software will never be industrialised like other engineering disciplines (not that they are industrialised even remotely to the extent the software types always seem to think they are, and anyway, I digress).) Even the simplicity of throwaway scripts is deceptive: you have an OS beneath, and they run inside an interpreter which takes care of memory management and many other menial tasks; and neither the OS (to a large extent) nor the interpreter is written in assembly, so you also need a compiler. The amount of work that has gone into making Perl one-liners simple is quite imposing.
Anyway, I'm rambling. The point is that complexity management is by far the most important aspect of designing programming systems (i.e. meta-programming), and to me it seems like your post does not go into this at all. You admit that MI becomes unworkable for the programmer in large hierarchies; I believe that's a much more salient point than its performance.
Makeshifts last the longest.
Re^3: Informal Poll: why aren't you using traits?
by BrowserUk (Patriarch) on Nov 19, 2005 at 08:27 UTC
"The way this post reads to me is that MI is expensive in dynamic languages, and traits are one of many attempts to address that – that if MI were cheap, we wouldn't need traits."

Then I failed dismally in my attempt to convey what I was trying to say :(

Yes, MI becomes rapidly unworkable in extended hierarchies. Yes, Traits and their kin are an attempt to reduce that complexity. Yes, MI is expensive in dynamic languages. But definitely NO to "if MI were cheap, we wouldn't need traits". I did say (twice) that I am convinced that the basic issue they are trying to address needs tackling.

What I hoped to point out is that there are many subtly different attempts at describing solutions being proffered currently, but that they are concentrating on their differences--which are minutiae relative to the problems of performance and footprint that they bring with them.

In compiled languages, where most of the groundwork for Traits and the others is being done, the complexities of method resolution and search order are compile time only costs. By runtime, the method call has been resolved either to a vtable entry, or to the actual entrypoint address.

Perl (and all dynamic languages) already has a performance problem with method lookup. One of the major benefits hoped for from the Parrot project is a reduction in the overheads associated with the mechanics of subroutine invocation--stack setup, scope maintenance, closure propagation, etc. If that effort succeeds, it could reduce the cost of calling a sub to the point where static (compile time) MI would be tolerable from a performance point of view, though the inherent brain-melting problem of MI would persist. It would also make Traits (and similar solutions) a practical answer to that MI complexity--but only if the method resolution can be fully resolved at compile time.

The fear I have is that Perl 6, and other dynamic languages that are trying to adopt trait-like behaviours, are also adding
by TimToady (Parson) on Nov 20, 2005 at 01:38 UTC
One principle he kinda glosses over is that we tend to build features into the signature/type system when it replaces code you'd have to write yourself, and probably do a poorer job at.

Signature types are not there primarily so that your routine can wring its hands over the shoddy data it is receiving. You can use them for that, of course, but the signature types are there mostly so that the MMD dispatcher can decide whether to call your routine in the first place. That's just an extension of the idea that you shouldn't generally check the type of your invocant because the dispatcher wouldn't call you in the first place unless the type were consistent. By putting the type declaratively into the signature, we can give the information to the MMD dispatcher without committing to a type check where the computer can figure out that it would be redundant.

And the whole MMD dispatch system is there to replace the nested switch statements or cascaded dispatches you'd have to do to solve the problem if you rolled your own solution. And then it would still be subtly wrong in those cases where you're tricked into imposing a hierarchy on invocants that should be treated equally. The whole Perl 5 overloading scheme is a case study in that sort of error...

Likewise the rest of the signature binding power is provided to declaratively replace all the boilerplate procedural code that people have to write in Perl 5 to unpack @_. Even if the declarations just generate the same boilerplate code and we get no performance boost, we've at least saved the user from having to watch the faulty rivets pop out of their boilerplate. Not to mention having to stare at all that boilerplate in the first place...

Anyway, those are some of the principles that have guided us. We may have certainly screwed them up in spots, and doubtless we'll find some bottlenecks in the design that we didn't anticipate because we're just too stupid. But as you may have noticed, we're trying really hard to design a language where we can compensate for our own stupidities as we figure them out over the long term. If there's anything that can't be fixed in the design, that's probably it.
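To give a concrete (and entirely made-up) example of the kind of Perl 5 boilerplate being replaced, here is the sort of hand-rolled @_ unpacking and checking that a declarative signature with types could express in a single line; the routine and its arguments are invented purely for illustration:

    use strict;
    use warnings;
    use Carp ();

    # Hand-written argument unpacking and type checking -- exactly the sort
    # of boilerplate a declarative signature is meant to generate (or avoid).
    sub distance {
        my ($p1, $p2) = @_;
        Carp::croak("distance() expects two point hashrefs")
            unless ref $p1 eq 'HASH' && ref $p2 eq 'HASH';
        for my $p ($p1, $p2) {
            Carp::croak("point is missing an x or y coordinate")
                unless defined $p->{x} && defined $p->{y};
        }
        return sqrt( ($p1->{x} - $p2->{x})**2 + ($p1->{y} - $p2->{y})**2 );
    }

    print distance( { x => 0, y => 0 }, { x => 3, y => 4 } ), "\n";   # 5

Every caller-facing routine in a large Perl 5 codebase repeats some variation of this; the point above is that pushing it into the signature lets the dispatcher do the checking (or skip it when it can prove it redundant).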
by BrowserUk (Patriarch) on Nov 20, 2005 at 05:38 UTC
"But as you may have noticed, we're trying really hard to design a language where we can compensate for our own stupidities as we figure them out over the long term."

The only response I have is that my fears do not translate into anyone's "stupidities"--'cepting maybe my own. My expressing my fears in public is as much about getting guys like you and stvn to help quell them as it is about bringing them to anyone else's attention. I am unsure whether to read this response as an attempt by you to quell my fears, or ...?

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
by stvn (Monsignor) on Nov 19, 2005 at 18:59 UTC
"In compiled languages, where most of the groundwork for Traits and the others is being done, the complexities of method resolution and search order are compile time only costs. By runtime, the method call has been resolved either to a vtable entry, or to the actual entrypoint address."

You are mistaken here on two points, actually. First, most of the work on traits is being done using Smalltalk, which is in fact a compiled language, but it is also a very dynamic language. IIRC it does not use any type of vtable or compile-time method resolution, but treats all object message sends as dynamic operations. Second, not all compiled OO languages perform method resolution at compile time; that is a (C++/Java)ism really, and does not apply universally.

I also want to point out that Class::Trait does all of its work at compile time. This means that there is no penalty for method lookup by using traits. In fact, since the alternative to traits is usually some kind of MI, traits are actually faster: the methods of a trait are aliased in the symbol table of the consuming class, so no method lookup (past local symbol table lookup) needs to be performed.

... snipping a bunch of stuff about Parrot and method performance ...

Well, again, Traits (the core concept, not just Class::Trait) do not really have any method lookup penalty. The whole idea is that you don't have another level of inheritance, so you don't have all the pain and suffering which goes along with it. I suggest you read the other papers on traits, not just the canonical one Ovid linked to; they provide a much more detailed explanation of the topic.
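To make the symbol-table aliasing point concrete, here is a hand-rolled sketch of the flattening idea. This is NOT Class::Trait's actual implementation, and all the package and method names are invented; it just shows why a flattened method costs no more to call than one written directly in the consuming class:

    use strict;
    use warnings;

    package TComparable;                # the "trait": just a bag of methods
    sub equal_to     { my ($self, $other) = @_; $self->compare($other) == 0 }
    sub not_equal_to { my ($self, $other) = @_; !$self->equal_to($other)    }

    package Currency;

    BEGIN {
        no strict 'refs';
        for my $method (qw(equal_to not_equal_to)) {
            # alias the trait's sub directly into Currency's own stash
            *{"Currency::$method"} = \&{"TComparable::$method"};
        }
    }

    sub new     { my ($class, $amount) = @_; bless { amount => $amount }, $class }
    sub compare { my ($self, $other)   = @_; $self->{amount} <=> $other->{amount} }

    package main;

    my $five  = Currency->new(5);
    my $seven = Currency->new(7);
    print $five->equal_to($seven) ? "equal\n" : "not equal\n";   # "not equal"

Because equal_to and not_equal_to now live in Currency's own symbol table, dispatch never has to walk @ISA to find them.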
What you have just described is essentially CLOS (the Common LISP Object System), which is not slow at all. In fact, in some cases CLOS is comparable to C++ in speed, and I would not doubt that in many cases it is faster than Java (and let's not even talk about programmer productivity or compare LOC, because CLOS will win hands down). Java/C++ both suck rocks at this kind of stuff for one reason, and one reason alone: they were not designed to work this way. If you want these types of features in your object system, you need to plan for them from the very start; otherwise you end up with..... well, Java.

It is also important to note that a dynamic language can be compiled, and that the concepts are not mutually exclusive. LISP has been proving this fact for over 40 years now.

"The code reads:"

To start with, macros are expanded at compile time in pretty much all languages I know of. Sure, it might be the second phase of a compile, but it is still before runtime.

Next, an $obj should hold its class information directly in its instance. In Perl 5 it is attached to the reference with bless; other languages do it their own way, but in general, an "instance type" will have a direct relation to the class from whence it came. So my point is that while it might be a level of indirection, it is very slight, and certainly should not involve any serious amount of "lookup" to find it.

As for the method name lookup, you are correct, but some kind of lookup like this happens for just about every sub call too (unless of course you inline all your subroutines, which would be just plain silly). We suffer a namespace lookup penalty because it allows us to use namespaces, which are essential to well-structured programming and have been for about 20+ years now. Basically, what I am getting at is that you should not add this to your list of "why OO is slow", since it is not really OO that brings this to the table; it is namespaces as a whole.
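A tiny illustration of that (the class and method names are invented for the example): the class of a blessed reference travels with the reference itself, and resolving a method name in that class is the same flavour of symbol-table lookup that any package-qualified sub call pays.

    use strict;
    use warnings;

    package Counter;
    sub new  { my $class = shift; bless { count => 0 }, $class }
    sub incr { my $self  = shift; ++$self->{count} }

    package main;

    # The class is stored on the reference by bless -- no search needed.
    my $obj = Counter->new;
    print ref($obj), "\n";            # "Counter"

    # Resolving the method name is a lookup in Counter's stash, the same
    # kind of namespace lookup an ordinary fully-qualified sub call makes.
    my $code = $obj->can('incr');     # code ref found via the symbol table
    print $obj->$code(), "\n";        # 1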
Whoops... you are assuming traits/roles are inherited again. They are not; they are flattened; they have no method lookup penalty.

"Also look up classX' search pattern (depth/breadth/etc.)."

Ouch! This is a bad, bad, bad idea; it would surely mean the end of all life as we know it ;) But seriously, take a look at C3; it (IMO) alleviates the need for this type of "feature".

"For each superthing, lookup the address of it's vtable and check if it can do the method."

There you go with those vtable things again; that's just plain yucky talk. Seriously, method dispatching can be as simple as this:
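Something along these lines -- a rough sketch only, with illustrative sub and class names rather than anything from a real module:

    use strict;
    use warnings;

    # Walk the linearized class list, take the first package whose symbol
    # table actually has the method, and call it. No vtables in sight.
    sub dispatch {
        my ( $object, $method_name, @args ) = @_;
        no strict 'refs';
        for my $class ( linearized_isa( ref $object ) ) {
            if ( defined &{"${class}::${method_name}"} ) {
                return &{"${class}::${method_name}"}( $object, @args );
            }
        }
        die "No such method '$method_name' for " . ref($object) . "\n";
    }

    # Classic Perl 5 depth-first, left-to-right walk of @ISA; swap in a C3
    # linearization here and the dispatch loop above does not change at all.
    sub linearized_isa {
        my $class = shift;
        no strict 'refs';
        return ( $class, map { linearized_isa($_) } @{"${class}::ISA"} );
    }

    package Animal;
    sub speak { my $self = shift; return "some generic noise" }

    package Dog;
    our @ISA = ('Animal');

    package main;
    my $dog = bless {}, 'Dog';
    print dispatch( $dog, 'speak' ), "\n";   # "some generic noise", found via Animal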
Even with Traits/Roles, it really can be that simple (remember, they flatten, not inherit). Sure, you can add more "stuff" onto your object model, which complicates the dispatching, but still the core of it doesn't need to be much more than what you see above.

... snip a bunch of other stuff ...

This is a non-issue; let me explain why. To start with, in pure OO (no traits/roles) there is no conflict arbitration: if you find it in the current class, that is it, done, end of story. If you add traits/roles it actually doesn't change anything, since they are "flattened". By the time the dispatcher gets to them, they are just like any other methods in the class, so normal OO dispatch applies.

"Assuming that after all that, we isolated a resolution for the method, we now have to go through lookups for PRE() & POST() methods, and about half a dozen other SPECIAL subs that can be associated with a method or the class it is a part of."

A good implementation of this would have combined the PRE, POST and SPECIAL subs together with the method already (probably at compile time); I know this is how CLOS works. The cost you speak of is really an implementation detail, and (if properly implemented) is directly proportional to the gain you get by using this feature. Always remember that nothing is free, but some things are well worth their price.

"And that lot, even with the inherent loops I've tried to indicate, is far from a complete picture of the processes involved if all the features muted for P6 come to fruition. All of that was just to find the method to invoke."

A good number of these can and will be resolved at compile time by the type inferencer (remember, Perl 6 will be a compiled dynamic language, just as Perl 5 is today). And of course a properly written implementation means that you will only pay for the features you actually use, so things like type constraints (subtyping) will not affect you unless you actually use them (and again, this will likely be something done at compile time anyway).

Keep in mind that many of the features you describe here, which you insist will slow things down, are features found in a number of functional languages, many of which are really not that slow (and compare to C speed in some cases). Compiler technology and type checking have come a long way since the days of Turbo Pascal, and it is now possible to compile very high-level and dynamic code in, say, Standard ML or Haskell to very, very tight native code. My point: it is not just hardware technology which is advancing.

"Yes, I agree that there is a complexity problem with MI that must be addressed, but I also see huge performance problems arising out of the solutions be proposed, which when combined with all the other performance sapping, runtime costs being added through the desire for even greater introspection and dynamism."

Well, I think you are mistaken about these "performance problems" in many cases, but even so, if Traits make for a cleaner, more maintainable class hierarchy, that is a "performance problem" I can live with. Remember, for many programs, your greatest bottleneck will be I/O (database, file, network, whatever).
IMO, only if you are writing performance-critical things like video games or nuclear missile guidance systems do you really need to care about these "performance problems", and if that is what you are writing, then why the f*** are you writing it in Perl ;)

"Combined, these mean that the single biggest issue I have with the current crop of dynamic language implemetations, performance--which Perl is currently the best of the bunch--is going to get worse in the next generation, not better."

To start with, Perl is not the fastest, nor is it the most dynamic. If you want dynamic, let's talk LISP, which not only has what Perl has, but has much of what Perl 6 will have, and then some (it certainly has all the features you have described above). LISP is not slow; in fact, it is very fast. Why? Well, because it is compiled correctly. If we continue to use old and outdated compiler theory/technology, then all the cool new whiz-bang stuff we want to add onto our language will just slow it down. On the other hand, if we bring our compiler theory/technology up to date with our language design/theory, then it is likely we won't suffer those penalties. Remember, just because Java/C++/C#/etc. can't do it right doesn't mean it can't be done.
-stvn
by BrowserUk (Patriarch) on Nov 20, 2005 at 04:51 UTC
Okay. First a few general clarifications about my original reply.
by Ovid (Cardinal) on Nov 20, 2005 at 20:17 UTC
by tilly (Archbishop) on Nov 20, 2005 at 18:15 UTC
by Aristotle (Chancellor) on Nov 20, 2005 at 18:51 UTC
by stvn (Monsignor) on Nov 20, 2005 at 16:05 UTC
by TimToady (Parson) on Nov 20, 2005 at 18:34 UTC
by hv (Prior) on Nov 21, 2005 at 12:50 UTC
by jeffguy (Sexton) on Nov 20, 2005 at 06:33 UTC