Okay. First a few general clarifications about my original reply.
Ovid's phrasing of his question permitted my response, which was meant to be summarised by:
Whilst I appreciate the need for some trait-like mechanism, there are currently many competing visions for how that mechanism should operate at the semantic level, and few, if any, practical implementations of any of them. Until the field of proposals thins out, through consolidation or lack of interest, and it becomes possible to compare practical implementations of those that remain side by side, evaluating them both for the practicality of their semantics in use and for the impact they have on performance, I am not ready to expend effort trying to decide which of the visions is the 'Right One'.
That inevitably limits it to some mix of what I know, or rather what I think I know, about some aspects of MI languages I have used.
So that limits it to
I've more recently played with Smalltalk Express from Objectshare, which I think is pretty much the same as Smalltalk V above in terms of the technology employed internally. Of course, it runs much more quickly now, on a 2.4 GHz P4 machine, than it did back then on a 20 MHz 386, or the 40 MHz P1 that was the current top of the range when I was using it professionally.
I've got a copy of Visual Age Smalltalk, which I have heard good things about. It is a trial edition on a CD I picked up at a trade show somewhere. Maybe I should dig it out and take a look one day soon.
I did little with this as I was only acting as a teaching assistant to someone taking an OU course that called for it. I did acquire my appreciation of Design by Contract from it though.
I don't recall ever having heard of Dylan; I didn't know what the acronym CLOS stood for, though I have seen it come up by reference a few times; despite 3 serious attempts I've never got to grips with LISP.
LISP is one of two languages that I think I ought to "know"; that I ought to "have used in earnest". Unfortunately, I can never get past the syntax--it is just too tedious for words. The other is Haskell, which, despite my best efforts, I have a similar problem with. I won't justify that any further, nor respond to criticism of it. Some languages just gel for me and others do not.
I have used Forth more extensively, a long time ago, and I believe that, back then anyway, both languages (Forth and Lisp) featured similar threaded-interpreted bytecode as the basis of their implementations. Again, things have moved on.
Which brings me to my last general point.
I said more on that in How do I know what I 'know' is right? and I won't repeat it here, but as a general form of disclaimer: unless every post here is going to be treated as a form of scientific paper, thoroughly researched over weeks and bringing in every available scrap of bleeding-edge research, it is inevitable that the information within posts here will sometimes reflect "old knowledge".
As I have also said before, my reason for frequenting this place is to learn! If along the way I impart some useful knowledge to others that's a nice bonus, but my reasons are purely self-interest: I want to learn.
And one of the best ways I know of learning is to converse with others who are more knowledgeable than oneself. And that means exposing one's opinions, thoughts, assumptions and dogmas to the cold light of scrutiny by those more knowledgeable people.
Through a quirk of fate I arrived at the Monastery Gates and discovered a place with a rare mix of intelligent, thoughtful and experienced people with an above average tolerance for those less knowledgeable seeking enlightenment. Way above average. I stuck around.
The greatest pleasure in life, beyond the satisfaction of carnal desires, is the feeling that I have learnt something. So when someone with greater knowledge than myself comes along and educates me, I am "over the moon Brian!". Thanking you.
So now to address (some of) the points you raise in your responses. Anything I don't touch on means I accept your correction, or your greater knowledge.
I chose the phrase "dynamic languages" as a way of avoiding the debate about what constitutes a compiled versus interpreted language.
At various points in your posts you state or imply that your interpretation of a "dynamic language" includes, amongst others, C++, Standard ML and Haskell; that these have (some degree of) dynamism to them. This is where we differ.
As far as I am aware, none of these languages can do what follows:
Perl 5 can require, use, or eval classes into existence.
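A minimal Perl 5 sketch of what I mean, using an invented Greeter class: nothing about the class exists until the string is eval'd at runtime.

```perl
use strict;
use warnings;

# Nothing about 'Greeter' exists until this eval runs;
# the class and its methods are compiled into existence at runtime.
my $source = q{
    package Greeter;
    sub new   { my $class = shift; return bless {}, $class }
    sub greet { return "hello" }
    1;
};

eval $source or die $@;

my $obj = Greeter->new;
print $obj->greet, "\n";    # prints "hello"
```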
Perl 6 will, if my memory holds good, create a new class if an instance of an existing class is modified by the runtime addition of a method, or the application of a mixin. The existing class remains as it was, but the modified instance becomes an instance of a new duplicated-and-modified class.
As above, both Perl 5 and Perl 6, and also (I believe) Python and Ruby can do this through direct or indirect means.
In Perl 5 terms, re-bless.
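In Perl 5 that instance-level change of class is just a re-bless; a small sketch with invented Animal and Dog classes:

```perl
use strict;
use warnings;

package Animal;
sub new   { my $class = shift; return bless {}, $class }
sub speak { return "..." }

package Dog;
our @ISA = ('Animal');
sub speak { return "woof" }

package main;

my $pet = Animal->new;
print ref($pet), "\n";      # Animal

# Re-bless the existing instance into another class at runtime;
# all subsequent method dispatch follows the new class.
bless $pet, 'Dog';
print ref($pet), "\n";      # Dog
print $pet->speak, "\n";    # woof
```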
In essence, the defining characteristic of what I mean when I refer to "dynamic languages" is the ability to eval code into existence.
Unless things have moved on markedly from where they were when I was last current with the proposed state of Parrot, this gets even more complicated, with any one language being able to act on and modify the state of the inheritance trees of all the other supported languages.
It is, or at least was, proposed that Perl could use Python library code and vice versa. And even that each could eval code of the other language into existence at runtime.
As far as I am aware, none of what I would term true compiled languages, for which I will cite C++, Haskell and O'Caml, can achieve this. Their dynamic aspects are constrained to compile-time creation of classes, with the possibility of classes being generated via meta-programming: through the use of templates (and now traits) in C++; Template Haskell in Haskell; and MetaOCaml in O'Caml. I'm not sufficiently up on Standard ML, CLOS, LISP and others to comment.
The only runtime dynamism that these languages have, (again, as far as I am aware), comes in the form of runtime decision branching on the basis of introspection.
    if classof( object ) do_this(); else do_that(); end;

    if classof( object ).has_method( X ) ....
though the syntax will vary, sometimes extremely.
The basic constraint is that the introspection is limited to essentially read-only state, and both branches of any decision that can be made as a result of introspection have to be in place at compile time for type checking, type inferencing, etc.
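The Perl 5 spelling of that kind of read-only introspection branching might look like the following (Counter is an invented example class); the point being that both arms of each branch already exist in the compiled program:

```perl
use strict;
use warnings;

package Counter;
sub new  { my $class = shift; return bless { n => 0 }, $class }
sub incr { my $self = shift; return ++$self->{n} }

package main;

my $obj = Counter->new;

# Branch on the object's class...
$obj->incr if ref($obj) eq 'Counter';

# ...or on whether a method exists; in a static language, both
# arms must be written and compiled before the program ever runs.
if ( $obj->can('incr') ) { $obj->incr }
else                     { warn "no incr method\n" }

print $obj->{n}, "\n";    # 2
```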
These are what I would call "static languages", as all possible branch points, and the code they invoke, are known ("statically" defined) by the time compilation is complete.
So, for the purposes of discussion, my definition of "dynamic languages" would include Perl, Python, Ruby, Lua, Tcl, and Smalltalk as I know it.
It would not include C, C++, ML, O'Caml, Haskell.
I cannot comment upon LISP, CLOS or DYLAN as I have no idea whether these would constitute static (true compiled) or dynamic (compiled to byte code and interpreted) languages according to my definition.
If LISP is so dynamic, and yet also so fast, I would like to understand how it achieves that.
I was using this as a generic term, one that most people might be familiar with, for the process that requires a language to go through an (at least) two-stage process of lookup when a method invocation is encountered within a piece of code.
This is pretty much true for all OO languages, whether it is termed method invocation, message passing or anything else. However, where it differs between two distinct classes of language is in the timing of that lookup.
I chose to use the term vtable for this, but you can call it symbol table lookup, or hash lookup or in the case of the Spineless-tagless G-machine, I think it is termed the "info-table" lookup (but don't quote me on it :).
The point is that there has to be a lookup done somewhere, and I chose to term that process using the term I am, (and perhaps many other people are), most familiar with.
So, what is really important about method resolution is when it is done, and how long it takes.
And compilation is done once. And done before the program is ever loaded or run.
In these languages it matters not (a jot) how long compilation takes--a good thing in Haskell's case for some complicated programs (like the Haskell compiler itself!). The user never sees, nor has to wait for, that compilation.
Some languages, like Haskell and O'Caml, make full use of this off-line time to extraordinary effect, producing immensely fast executables and providing extremely useful language features like lazy evaluation, lazy lists, etc. They do this (without my claiming any expertise on these compilers) by analysing the bejeebers out of the code to the point where, by the time you get to run the code, it has (almost) been reduced to a series of lookup tables and a few branch points.
That's a gross oversimplification of the process, but the point is that by the time methods are actually invoked, they have been reduced to a single-level lookup that only requires the code to actually be run the first time it is encountered; from that point on the value is just substituted for the call. All "class lookups", "method resolutions" and related processing are completed before the program is ever loaded.
Subroutine (method) caching (optimisation) is not only built into the compilation cycle, it is pretty nearly, if not completely, ubiquitous.
In the absence of the ability to load pre-compiled byte code, not only does whatever time is spent performing class and method resolution impact the user every time they run the code, the fact that it does imposes limitations on how much time can be spent optimising the results.
Even with the ability to load pre-compiled byte code, the imperative to allow that byte code to run anywhere, means that final conversion to machine code (JIT) must be delayed until load time. And, unless you are going to give up on modularisation, relocation fix ups also have to be done at this time.
And then, you can only fully reduce the lookups within any given class heterarchy to a single level if the language accepts and enshrines the notion that all classes are closed at runtime! To quote from the Dylan document:
... a monotonic linearization enables some compile-time method selection that would otherwise be impossible in the absence of a closed-world assumption.
Without that "closed-world assumption", you have to have at least one decision point at each method lookup:
    if( classIsClosed ) {
        Look up the address of the code in this class, fix up the
        parameters and invoke it.
    }
    else {
        Perform a full search of the inheritance heterarchy to locate
        the (appropriate) method provider (possibly with conflict
        arbitration). Fix up the parameters (with possible further
        class and method resolution cycles). Invoke the code.
    }
Not only does the non-closed branch of that take longer, it also requires that all the support structures and mechanisms be in place, regardless of whether it is ever used in any given program.
That means at least a flag per class to indicate closedness. It also means that one must retain a lookup table of some kind that maps class names to other lookup tables that map method names to methods, and that retains the superclass hierarchy and search pattern from this point forward.
In a fully pre-compiled language, much of this information can be discarded at compile time.
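As a rough sketch (in Perl, with invented names; real implementations differ considerably) of the kind of support structures I mean, and of the hierarchy walk that must be retained when classes might be open:

```perl
use strict;
use warnings;

# Hypothetical runtime structures: class name -> { closed flag,
# superclass list, method table }. All names here are invented.
my %classes = (
    Base    => { closed => 0, isa => [],       methods => { speak => sub { 'base' } } },
    Derived => { closed => 0, isa => ['Base'], methods => {} },
);

# Without a closed-world assumption, every dispatch must be
# prepared to walk the hierarchy at runtime.
sub dispatch {
    my ( $class, $method ) = @_;
    my @search = ($class);
    while ( my $c = shift @search ) {
        my $meta = $classes{$c} or next;
        return $meta->{methods}{$method}
            if exists $meta->{methods}{$method};
        push @search, @{ $meta->{isa} };    # try the superclasses next
    }
    die "no method '$method' via class '$class'";
}

my $code = dispatch( 'Derived', 'speak' );
print $code->(), "\n";    # base -- found one level up the hierarchy
```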
So the question comes, is the "closed-world assumption" true for the languages we care about?
In the case of Perl 5, I would say that it is most definitely not.
Perl 6? I seem to recall that there was mention of the ability to mark a class as closed. Which as far as I can see, means that non-closed classes are possible. And with that conclusion comes the requirement for all the support structures and code to provide for that possibility.
I took Aristotle's response, (I think correctly), to mean that if Traits, (or one of the other mechanisms), solves the problem with MI, then surely performance is but a secondary consideration.
You ask, in your second post:
only if you are writing performance critical things ... if you are ... then why the f*** are you writing it in Perl ;)
I like Perl. I like the ethos, permissiveness, conciseness and productivity it gives me. Given the choice, I would use Perl in preference to any other language with which I am familiar. That's quite a long list, though it does have its holes.
Love its freedoms; know its limitations.
One of the limitations of Perl (and other "dynamic languages" according to my definition), is performance.
Besides the existence of empirical evidence, there is more practical evidence of the performance limitations of Perl, from which I will draw one quote:
The primary advantages of mod_perl are power and speed.
The very existence of these solutions, is a strong indicator of a problem.
So, with respect, the need for speed goes way beyond the bleeding edge of "video games" development, or the esoterics of "Nuclear Missile Guidance systems".
A regular question that arises here at PM is "How do I keep the browser user informed, whilst I generate X in the background?".
Wouldn't it be nice if you could generate your charts, or summarise your data, or search your in-memory DB quickly enough that you didn't need to keep the user apprised of the delay?
Yes, using Perl et al. is a conscious decision that we make, trading raw performance for programmer productivity.
Yes, we can throw hardware at the problem to circumvent the need to move to another language for data hungry and/or CPU hungry processes.
Yes, we can drop into XS, or Inline C, or PDL or Math::Pari to mitigate localised performance hits.
But wouldn't it be nice to avoid all of these expediencies?
I have never known the situation in 25 years--except for the occasional old video game being run on new processors where the timing loop ran so fast that they became unplayable--where a user has complained that their program "runs too fast". Everybody likes it when the programs they use run quickly.
Not at the expense of correctness; or usability; or "good design" or maintainability--though the significance of those last two depends very much who you are.
Most users do not care a fig for how hard it is to maintain the software they use; they are only interested that it does what it is meant to do, correctly, with as little effort on their behalf as possible, and as quickly as possible. Their time is money, just as the programmer's is.
Programmers may be unique in the effect that their decisions can have upon the daily lives of millions of people.
"This would run faster if I accessed the instance data directly, but it will be a whole lot easier for me, or one of my fellow programmers, to modify, should that need ever arise in the future, if I indirect the accesses through setters and getters."
And a few hundred thousand people around the world every day, wait a second or two longer every time they use the application or web site that class is a part of.
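The trade-off in that quote is easy to make concrete in Perl 5, where each accessor call costs a method dispatch that direct hash access avoids (the Point class is invented for illustration):

```perl
use strict;
use warnings;

package Point;
sub new { my ( $class, %args ) = @_; return bless { x => $args{x} || 0 }, $class }

# Accessor: a method dispatch per read or write, but callers are
# insulated from the underlying representation.
sub x {
    my $self = shift;
    $self->{x} = shift if @_;
    return $self->{x};
}

package main;

my $p = Point->new( x => 1 );

$p->{x} += 1;          # direct access: faster, but ties callers to the hash layout
$p->x( $p->x + 1 );    # accessor: slower, but easier to change later

print $p->x, "\n";     # 3
```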
Programmers aren't the only ones who allow self-serving decisions to affect their customers; but they are one of the few groups whose decisions can quickly affect large numbers of people; and one of the very few groups that do it "just in case".
If compiler and caching technology has reached a sufficient state of advancement that all the features mooted for Perl 6 can be accommodated while still allowing a sufficient level of performance when those features are not employed, then I am all for them.
But most of what I have read regarding the development of implementations of trait-like mechanisms, is being done with languages that are fully pre-compiled producing static executables with (mostly) static class heterarchies, and read-only introspection capabilities.
Parrot is meant to be going to produce distributable, pre-compiled binaries that will require only load-time relocation fix-ups and maybe that will be the saving grace.
Still, all the features mooted for inclusion in Perl 6 (assuming they are not deferred) that will impose not just runtime hits if they are used, but also the necessity to architect the entire language implementation to accommodate their use--think of Perl 5 and threads, and the performance hit that compiling with MULTIPLICITY and USE_ITHREADS in 5.8.x has compared to without, or to 5.6.2--worry me.
This is almost an aside, but you did bring this up in both posts and in the first one, attributed the "proposal" to me.
This was in no way anything I was proposing.
I was alluding to proposals that I vaguely recalled from watching the Perl6/Parrot lists go by, where keywords were being mooted to allow the programmer to specify the inheritance tree search ordering. I vaguely remember trying to look up some term that came up within those discussions--something like the "New York Method" or "City Block Method"?--and failing to find an explanation.
I just spent an inordinate amount of time trying to relocate those list discussions and failed miserably (though I saw your name come up a lot in later, similar threads!).
I had just about given up when a search turned up this page of Apocalypse 12. And there, right under the first paragraph heading near the top of the page are the following keywords:
    :canonical   # canonical dispatch order
    :ascendant   # most-derived first, like destruction order
    :descendant  # least-derived first, like construction order
    :preorder    # like Perl 5 dispatch
    :breadth     # like multimethod dispatch

and some that specify selection criteria:

    :super             # only immediate parent classes
    :method(Str)       # only classes containing method declaration
    :omit(Selector)    # only classes that don't match selector
    :include(Selector) # only classes that match selector
Now maybe I was (am) mixed up about what use was proposed for these keywords, or maybe that proposal has been changed or dropped, but I was not imagining that there was something relating to this, and I was definitely not proposing it.
The upshot is, that I want Perl 6 to succeed--for purely selfish reasons.
I want to be able to program everything in Perl. Well, maybe not Nuclear Missiles or video games, but as far as possible everything else. I don't want to have to resort to its equivalent of XS or Inline C. I don't want to have to make use of libraries like GD and PDL and Math::Pari to achieve reasonable performance for CPU-intensive work. I know that C, or pretty much any fully pre-compiled language, will be faster than Perl for these tasks, but so what? With the expenditure of sufficient effort, I could do everything I now do in Perl in Assembler. And it would be faster. That is not the point. What I want is to achieve reasonable performance, directly in Perl, for most everything I routinely (and even occasionally) do by using other languages, even in part.
That's a lot of wants, and a high goal, but I believe that Perl-like VHLL, dynamic, semi-compiled languages are the most productive, and I want to benefit from that productivity for as much of what I do as I can. What's more, I believe (I'm beginning to sound like a Baptist preacher :) that much, if not all, of what I would like is achievable. I just fear that if too many nice-to-have features are added into the (core of the) language, the need to support them will have an overly detrimental effect on what can be achieved.
In the light of TimToady's post in this thread, it looks as if, through your influence or otherwise, my fears are unfounded. He does have a habit of making the right calls in these matters, so I will shut up and wait and see.
Relating this all back to the beginning and Ovid's post: I understand the need for Traits, or one of the near-aliases of that term, but I fear that, without seeing a live implementation in a Perlish language with all of the dynamism that entails, its provision within the core of Perl 6 will inevitably be another foot on the brake of its potential performance.
In Perl 5 terms, I think that any code using an implementation would have to be very unconcerned about performance to warrant its use.
It may be that you have hit upon a mechanism for performing this entirely at compile time, so that no runtime penalty ensues; but on the basis of what I have read, including those of the links from Class::Trait that worked, and the Dylan reference in particular, the requirement for a closed-world assumption seems to me to be in conflict with both Perl 5 and Perl 6. Maybe that can be mitigated without penalty in all but those cases where the assumption does not hold true but, as they say, seeing is believing.
History shows that the first few cuts of any new mechanism or algorithm can always be improved upon performance-wise. Whether sorting, FFT, hidden-line removal, ray tracing or prime validation, the algorithms just seem to get faster and faster with each new cut.
Performance is not the only criterion, nor even the first criterion, but it is a criterion against which a language can be, and will be, measured. And when adding features to the core of a language that potentially affect all programs that will be written in that language, whether they use the feature or not, you had best be sure that you pick the right semantics and the best algorithm.
And I am unsure yet whether Traits are either the best semantically, or the least likely to degrade performance, of the possible solutions to the problem they address.
In reply to Re^5: Informal Poll: Traits (Long Post Warning!)
by BrowserUk
in thread Informal Poll: why aren't you using traits?
by Ovid