in reply to Re^4: Informal Poll: why aren't you using traits?
in thread Informal Poll: why aren't you using traits?

Okay. First a few general clarifications about my original reply.

So now, to address some of the points you raise in your responses. Anything I don't touch on means I accept your correction or greater knowledge.

The upshot is that I want Perl 6 to succeed--for purely selfish reasons.

I want to be able to program everything in Perl. Well, maybe not nuclear missiles or video games, but as far as possible everything else. I don't want to have to resort to its equivalent of XS or Inline C. I don't want to have to make use of libraries like GD and PDL and Math::Pari to achieve reasonable performance for CPU-intensive work. I know that C, or pretty much any fully pre-compiled language, will be faster than Perl for these tasks, but so what? With the expenditure of sufficient effort, I could do everything I now do in Perl in Assembler, and it would be faster. That is not the point. What I want is to be able to do directly in Perl most everything that I now do, routinely or even occasionally, by resorting (at least in part) to other languages to achieve reasonable performance.

That's a lot of wants, and a high goal, but I believe that Perl-like VHLL, dynamic, semi-compiled languages are the most productive, and I want to benefit from that productivity for as much of what I do as I can. What's more, I believe (I'm beginning to sound like a Baptist preacher :) that much, if not all, of what I would like is achievable. I just fear that if too many nice-to-have features are added to the core of the language, the need to support them will have an overly detrimental effect on what can be achieved.

In the light of TimToady's post in this thread, it looks as if, through your influence or otherwise, my fears are unfounded. He does have a habit of making the right calls in these matters, so I will shut up and wait and see.

Relating this all back to the beginning and Ovid's post: I understand the need for Traits, or one of the near-aliases of that term, but I fear that without seeing a live implementation in a Perlish language, with all of the dynamism that entails, its provision within the core of Perl 6 will inevitably be another foot on the brake of its potential performance.

In Perl 5 terms, I think that any code using such an implementation would have to be very unconcerned about performance to warrant its use.

It may be that you have hit upon a mechanism for performing this entirely at compile time, so that no runtime penalty ensues from it; but on the basis of what I have read, including those of the links from Class::Trait that worked, and the Dylan reference in particular, the requirement for a closed-world assumption seems to me to be in conflict with both Perl 5 and Perl 6. Maybe that can be mitigated without penalty in all but those cases where the assumption does not hold true, but, as they say, seeing is believing.
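
To make the conflict concrete, here is a minimal Perl 5 sketch (class and method names invented) of the kind of thing that defeats a closed-world assumption: nothing prevents code from installing or replacing methods at runtime, so any dispatch decisions flattened at compile time may later become stale.

    use strict;
    use warnings;

    package Greeter;
    sub new   { bless {}, shift }
    sub greet { "hello" }

    package main;

    my $obj = Greeter->new;
    print $obj->greet, "\n";    # "hello" -- resolvable at compile time

    # Runtime redefinition: the "world" was never closed.
    {
        no warnings 'redefine';
        *Greeter::greet = sub { "bonjour" };
    }
    print $obj->greet, "\n";    # "bonjour" -- any resolution of greet()
                                # cached or flattened at compile time is now stale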

History shows that the first few cuts of any new mechanism or algorithm can always be improved upon, performance-wise. Whether sorting, or FFT, or hidden-line removal, or ray tracing, or primality testing, the algorithms just seem to get faster and faster with each new cut.

Performance is not the only criterion, nor even the first, but it is a criterion against which a language can be, and will be, measured. And when adding features to the core of a language that potentially affect all programs that will be written in that language, whether they use the feature or not, you had best be sure that you pick the right semantics and the best algorithm.

And I am unsure yet whether Traits are either the best semantically, or the least likely to degrade performance, of the possible solutions to the problem they address.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^6: Informal Poll: Traits (Long Post Warning!)
by Ovid (Cardinal) on Nov 20, 2005 at 20:17 UTC
    Wouldn't it be nice if you could generate your charts or summarise your data, or search your in-memory DB quickly enough that you didn't need to keep the user apprised of the delay?

    I note that you qualified that with "in-memory", so that's a way out, but I just want to point out that Perl is fast enough and computers are fast enough that whenever I have to let my user know of a delay, it's almost always due to a complicated database query, heavy disk IO or some request to an external resource. These three things, if slow, will be slow regardless of the language. Yes, Perl is slower than most commonly used languages, but much of that can be alleviated with profiling and proper algorithm design.

    And I am unsure yet whether Traits are either the best semantically, or the least likely to degrade performance, of the possible solutions to the problem they address.

    From my personal experience with MI, mixins (I faked 'em via Exporting), Java interfaces and traits, I'm fairly convinced that traits are the best semantically and the least likely to degrade performance (there's a tiny compilation hit with traits, but it's negligible: 300 tests in my latest version run in about 4 wallclock seconds). Of course, while my original suspicion of the superiority of traits came from my reading about them, my current opinion stems from my having experience with traits. I keep hearing in this thread comments which sound dangerously close to "I won't use traits because I haven't used traits" or "I won't use traits because others don't". I find the first argument to be stupid. We never learn anything that way. The second argument might have a bit of merit ... being afraid to lead the way is often a survival trait ... but that doesn't mean people can't try them on non-critical systems so they can find out for themselves whether or not they're worthwhile. Folks railing against a technology they've never used just strike me as a bit odd.
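
    For what it's worth, here is a rough sketch (package names invented) of the "mixin via Exporting" trick I mention above: behaviour is injected as plain exported subs, with no conflict detection and no required-method checking -- exactly the sort of thing traits formalise.

        package Comparable;
        use strict;
        use warnings;
        use Exporter 'import';
        our @EXPORT = qw(equal_to greater_than);

        # These assume the consuming class provides compare().
        sub equal_to     { my ($self, $other) = @_; $self->compare($other) == 0 }
        sub greater_than { my ($self, $other) = @_; $self->compare($other) >  0 }

        package Currency;
        Comparable->import;    # "mixes in" equal_to() and greater_than()

        sub new     { my ($class, $amount) = @_; bless { amount => $amount }, $class }
        sub compare { $_[0]{amount} <=> $_[1]{amount} }

        package main;
        print Currency->new(5)->greater_than(Currency->new(3)) ? "yes\n" : "no\n";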

    The major questions I wonder about are whether or not the implementation I am maintaining has any bugs and what features need to be added or tweaked. However, I'm also not saying Class::Trait is the best choice, either. There are several competitors out there which offer different interfaces and capabilities. Just because Class::Trait is the most feature-complete doesn't make it the best. Still, I'd hate for folks to let fear of the unknown keep them from trying out these technologies. They really have made my life simpler at work (and I fully realize that my knowledge of real-world use of traits is relatively new -- it's better than most have, though :)

    Cheers,
    Ovid

    New address of my CGI Course.

Re^6: Informal Poll: Traits (Long Post Warning!)
by stvn (Monsignor) on Nov 20, 2005 at 16:05 UTC
    BrowserUK

    Wow... nice post :)

    I think that we have an impedance mismatch on our interpretation of the phrase "dynamic language".

    Yes, I agree, although our interpretations are not really that different. My criterion for a "dynamic" language is more the ability to write agile code which can cope with dynamic requirements. This could include eval-ing code at runtime, but it also includes other language features such as polymorphism. For instance, I am currently reading about the Standard ML module system. SML is very often rigorously statically compiled, but its module system is built in such a way that it almost feels like a more dynamic language. This is because of functors, which (if I understand them correctly) are essentially parametric modules whose parameters are specified as module "signatures". When you apply a functor, you pass it a "structure" that conforms to that "signature", and the functor then creates a new "structure" based on it (not all that different from the C++ STL, actually). If you then combine the module system with ML's polymorphism, you can get a very high degree of dynamic behavior while still being statically compiled.
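
    If it helps, here is a loose Perl 5 analogy (all names invented, and no claim to be faithful to SML) for that functor idea: a routine that takes any package conforming to a small "signature" -- here, a compare() class method -- and manufactures a new package from it.

        package MakeSortedBag;
        use strict;
        use warnings;

        sub apply {
            my ($class, $element_pkg) = @_;
            die "$element_pkg must provide compare()" unless $element_pkg->can('compare');

            my $new_pkg = "SortedBag::For::$element_pkg";
            no strict 'refs';
            *{"${new_pkg}::new"}    = sub { bless { items => [] }, $_[0] };
            *{"${new_pkg}::insert"} = sub {
                my ($self, $item) = @_;
                @{ $self->{items} } =
                    sort { $element_pkg->compare($a, $b) } @{ $self->{items} }, $item;
                return $self;
            };
            *{"${new_pkg}::items"}  = sub { @{ $_[0]{items} } };
            return $new_pkg;
        }

        package IntElem;
        sub compare { $_[1] <=> $_[2] }   # class method: compare two integers

        package main;
        my $bag_class = MakeSortedBag->apply('IntElem');
        my $bag = $bag_class->new->insert(3)->insert(1)->insert(2);
        print join(',', $bag->items), "\n";   # 1,2,3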

    My point is that static code analysis does not have to limit the dynamism in a language.

    I also want to quickly say that runtime introspection (read-only or read-write) is not (IMO) a criterion for dynamic languages. In fact, in some languages, like SML, I think runtime introspection is just not needed. However, that said, I personally like runtime introspection in my OO :)
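
    To pin down what I mean by runtime introspection, here is a throwaway Perl 5 example (names invented): asking an object, while the program runs, what it is and what it can do -- and, for the read-write half, poking at the symbol table too.

        use strict;
        use warnings;
        use Scalar::Util qw(blessed);

        package Counter;
        sub new  { bless { n => 0 }, shift }
        sub incr { $_[0]{n}++ }

        package main;
        my $c = Counter->new;
        print blessed($c), "\n";                        # Counter
        print $c->can('incr')    ? "yes" : "no", "\n";  # yes
        print $c->isa('Counter') ? "yes" : "no", "\n";  # yes

        # The methods a class defines are just entries in its symbol table,
        # which can be listed (or replaced) at runtime.
        no strict 'refs';
        print join(', ', grep { defined &{"Counter::$_"} } keys %Counter::), "\n";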

    If LISP is so dynamic, and yet also so fast, I would like to understand how it achieves that.

    I won't claim to be an expert on LISP compilation, because I truly have no idea about this. I do know that the only language still in use today as old as LISP is FORTRAN. Both of these languages have blazingly fast compilers available, probably for the simple reason that 40+ years of improvement have gone into them.

    As for how LISP is so dynamic, I think LISP macros have a lot to do with that. LISP has virtually no syntax (aside from all the parens), so when you write LISP code, you are essentially writing an AST (abstract syntax tree). LISP macros are basically functions, executed at compile time, which take a partial AST as a parameter and return another AST as a result. This goes far beyond the power of text-substitution-based macros. And of course, once all these macros are expanded at compile time, there are no runtime penalties.
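
    There's no real Perl 5 equivalent of LISP macros, but here is a small (contrived) illustration of that last point -- work done at compile time costs nothing at runtime: constants and BEGIN blocks are evaluated while the program is being compiled, and the folded result is what actually runs.

        use strict;
        use warnings;
        use constant DEBUG => 0;

        BEGIN {
            print "this runs while the program is being compiled\n";
        }

        sub hot_loop {
            my $total = 0;
            for my $i (1 .. 1_000_000) {
                # DEBUG is a compile-time constant, so this whole statement is
                # folded away by the compiler; there is no test at runtime.
                print "iteration $i\n" if DEBUG;
                $total += $i;
            }
            return $total;
        }

        print hot_loop(), "\n";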

    To be totally honest, I have written very little LISP/Scheme in my life. Most of my knowledge comes from "reading" it rather than "writing" it. But with languages like LISP, I think more of the (real-world applicable) value actually comes from the "grokking" of the language, and not the "using" of it. In other words, it is much easier to find work writing Perl than it is writing LISP, but knowing LISP can make me a better Perl programmer.

    With respect to my use of the term 'vtable'.

    <snip a bunch of things related to static method lookup vs. dynamic method lookup>

    Much of what you say is true, but I think it has more to do with the design and implementation of the languages, and less to do with the underlying concepts.

    I believe that static analysis can go a long way, that caching and memoization can take it even further, and that whatever's left is probably so minimal I don't need to worry about it. The best results can be achieved by combining all the best practices into one.
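
    As a small illustration of the caching/memoization half of that claim (the expensive lookup below is a stand-in I made up; Memoize itself ships with Perl): the work is done once per distinct input and served from a cache thereafter.

        use strict;
        use warnings;
        use Memoize;

        my $calls = 0;
        sub resolve_method {                     # stand-in for an expensive lookup
            my ($class, $method) = @_;
            $calls++;
            select(undef, undef, undef, 0.01);   # pretend this walks a deep MRO
            return "${class}::${method}";
        }
        memoize('resolve_method');

        for (1 .. 1_000) {
            my $name = resolve_method('Foo', 'frobnicate');
        }
        print "the expensive lookup actually ran $calls time(s)\n";   # 1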

    Will this work? I have no idea, but it's fun to try :)

    re: program efficiency vs. programmer efficiency

    I work for a consultancy which writes intranet applications for other businesses (we are basically sub-contractors). While performance is important (we usually have guidelines we must fall within, and we load test to make sure), these applications are long-lived (between 2 and 7 years). It is critical to the success of our business, and in turn to the success of our clients' businesses, that these applications are maintainable and extendable. Our end-users may not be anything more than peripherally aware of this, and therefore seem not to care about it. However, those same end-users like hearing "yes" to their enhancement requests too. So while those end-users may not associate this with my use of OO, or trade-offs I made for readability, or time I spent writing up unit tests, they certainly would "feel" it if I didn't do that.

    My point is that, for some applications, and for some businesses, application performance is much lower on the list than things like correctness, flexibility and extendability.

    Search patterns, breadth first/ depth first etc.

    Yeah, I read that part in A12 as well, I think it is flat out insanity myself :) Nuff said.

    I vaguely remember trying to look up some term that came up within these discussions--something like the "New York Method" or "City Block Method"?--and failing to find an explanation.

    The name you are looking for is "Manhattan Distance". I am not that familiar with the algorithm myself; however, I have surely (unknowingly) employed it many a time, since I lived in NYC for a while :) Google can surely provide a better explanation.
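
    For the record, the Manhattan distance between two points is just the sum of the absolute coordinate differences -- the distance walked along a city grid rather than the straight-line (Euclidean) distance -- i.e. something along these lines:

        use strict;
        use warnings;
        use List::Util qw(sum);

        sub manhattan_distance {
            my ($p, $q) = @_;    # two array refs of the same length
            return sum map { abs($p->[$_] - $q->[$_]) } 0 .. $#$p;
        }

        print manhattan_distance([1, 2], [4, 6]), "\n";   # |1-4| + |2-6| = 7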

    And I am unsure yet whether Traits are either the best semantically, or the least likely to degrade performance, of the possible solutions to the problem they address.

    I am not 100% sure of this either. I like the sound of Traits/Roles, but you never know when something better might come along.

    -stvn
      Yeah, I read that part in A12 as well, I think it is flat out insanity myself :) Nuff said.
      I'd just like to point out that the passage you're quoting has almost nothing to do with standard dispatch. It's just syntactic relief for alternate dispatchers, selected by the caller. That last bit is important. Once you commit to a particular dispatcher, you're stuck with it. If you use the ordinary dispatcher syntax, you get the ordinary dispatcher. There's no extra overhead there. In fact, you probably use the ordinary fast dispatcher to get to the alternate dispatcher, as if it were just an ordinary method call. The rest is just syntax.
        That last bit is important. Once you commit to a particular dispatcher, you're stuck with it.

        So if I understand correctly, if I choose for a particular method call to use breadth-first instead of the canonical C3, then it will apply for that particular method call only.

        That is still insane, but not as bad as I originally thought :)

        -stvn

      I do know that the only language still in use today as old as LISP is FORTRAN.

      Just a data point: my brother recently started a new job where he is learning COBOL for the first time.

      He's a mainframe programmer of 25-30 years' standing who spent most of his life in the airline bookings industry, but the jobs there have dwindled and mostly moved to the US; his new job is in the banking sector.

      COBOL is probably the highest-level language he's had the opportunity to use in anger.

      Hugo

Re^6: Informal Poll: Traits (Long Post Warning!)
by tilly (Archbishop) on Nov 20, 2005 at 18:15 UTC
    Just filling in a couple of details.

    First of all, you'll be glad to know that LISP is fully dynamic by your meaning of the phrase. No, I don't know what performance tricks it uses. Secondly, I can confirm that Ruby is dynamic in all the particulars you discuss, except that you can't change the class of an object at runtime. I should note that Ruby also does not allow multiple inheritance.

      Can you (or anyone) think of any case where reblessing is useful, in the sense that solving the same problem another way would be awkward?

      Makeshifts last the longest.

        I have objects that are lazily evaluated using rebless. Basically, the objects represent a pricing scheme. Until $obj->price() is called, the objects are just a hash of the parameters required to run a query against the DB to get the data, and are of the class 'Pricer::Stub'. When Pricer::Stub::price() is called, the routine extracts the required data, converts itself into a real Pricer object, and then calls price() a second time on the new object.

        Code using the Pricer object never knows or cares which object type is involved, as the Pricer objects are manufactured by a factory object.

        I used this because the $obj->price() method is called very often, and thus I didn't want conditional logic in the price() method itself to handle this behaviour. Personally, I think this is a very effective design pattern and I'm happy to use it.
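
        A bare-bones sketch of the pattern (the field names and the fake query are invented, but the class names follow the ones above):

            package Pricer::Stub;
            use strict;
            use warnings;

            sub new { my ($class, %params) = @_; bless { %params }, $class }

            sub price {
                my $self = shift;
                # Pretend this is the expensive DB query driven by the stored params.
                $self->{data} = { amount => 42 };
                bless $self, 'Pricer';     # the object converts itself in place
                return $self->price(@_);   # second call now reaches Pricer::price
            }

            package Pricer;
            sub price { $_[0]{data}{amount} }

            package main;
            my $p = Pricer::Stub->new(sku => 'ABC123');
            print $p->price, "\n";   # 42
            print ref $p, "\n";      # Pricer -- no conditional logic needed in price()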

        ---
        $world=~s/war/peace/g

        When you have a generic proxy object like Object::Realize::Later, which more or less implements laziness for method calls, it is quite convenient to have the object change class after it has been realized. Otherwise, lots of (brain-dead, I admit) checks fail when they ask UNIVERSAL::isa($obj, 'foo'). Of course one could circumvent this problem with multiple inheritance or by writing a specialized ::Proxy class for every class to be lazy ...

        Can you (or anyone) think of any case where reblessing is useful, in the sense that solving the same problem another way would be awkward?

        I've seen it used with a class hierarchy that was based around incrementally parsing a serialised data structure. As the data was parsed, the object was reblessed to more and more specific classes as more was known about the structure in question. Quite neat.

        I've also occasionally used reblessing to implement state transitions.
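
        For instance (a contrived example, class names invented), each state can be a class of its own and the transition is just a rebless:

            package Order::Pending;
            use strict;
            use warnings;
            sub new    { my ($class, $id) = @_; bless { id => $id }, $class }
            sub ship   { bless $_[0], 'Order::Shipped' }   # the state transition
            sub status { 'pending' }

            package Order::Shipped;
            sub status { 'shipped' }

            package main;
            my $order = Order::Pending->new(17);
            print $order->status, "\n";   # pending
            $order->ship;
            print $order->status, "\n";   # shipped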

        I think the time I'm most likely to rebless an object is if I want to subclass an existing package to get some modified behaviour. Mostly in such cases I can simply inherit from the base class and Subclass->new will do the right thing, but in some cases the base class's new relies on the invoked class's name to create the right object.

        My work application has occasionally needed such tricks, since the underlying database abstraction uses the class name to find the object-to-database mapping information. However, as of now there is only one example of such reblessing in 50 KLOC (and that in a proof-of-concept utility that won't be updated), since most of the original needs for it were removed when the database abstraction was modified to call the invoked class's bless method. We do have 6 examples of classes that overload bless to do various interesting things, and most of those would originally have reblessed the objects instead.
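
        A contrived sketch (the mapping scheme is invented) of roughly why that forces a rebless: when the constructor uses the invoked class's name to look up per-class information, a quick subclass with no information of its own can't be constructed directly, so you build the object as the mapped class and rebless it.

            package DB::Record;
            use strict;
            use warnings;

            # Per-class mapping info, keyed by the class the constructor is invoked as.
            my %table_for = ( 'DB::Record::User' => 'users' );

            sub new {
                my ($class, %fields) = @_;
                my $table = $table_for{$class}
                    or die "no mapping registered for $class";
                return bless { table => $table, %fields }, $class;
            }
            sub table { $_[0]{table} }

            package DB::Record::User;
            our @ISA = ('DB::Record');

            package DB::Record::User::Logged;      # quick subclass with tweaked behaviour
            our @ISA = ('DB::Record::User');
            sub table { warn "table() called\n"; $_[0]->SUPER::table }

            package main;
            # DB::Record::User::Logged->new(...) would die: no mapping for that name.
            # So construct as the mapped class and rebless to get the modified behaviour.
            my $u = DB::Record::User->new(name => 'fred');
            bless $u, 'DB::Record::User::Logged';
            print $u->table, "\n";                  # warns, then prints "users"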

        Hugo

Re^6: Informal Poll: Traits (Long Post Warning!)
by jeffguy (Sexton) on Nov 20, 2005 at 06:33 UTC
    BrowserUK,
    Off topic, I recommend (if you ever find time with all your studies) pursuing LISP, at least to the point where you see its power (the MANY uses of macros). I think you'd like it a lot. It is so completely customizable that you can make it whatever you want. It seems very perlish, but more so -- except that in Common Lisp they often opted for long function names instead of short ones. But I've found that the ability to apply once-and-only-once to the extreme, and to introduce new abstractions everywhere to simplify and shrink my code, makes up for the annoyingly long names. And it's fast.

    Anyway, here's a good (free) book that sprints through the language so it can get to all the cool things macros can do. Some of the concepts are very different from what I'm used to, so it took FOREVER to get through some of those middle chapters. But what I learned from the effort was worth it.
    http://paulgraham.com/onlisp.html

    Thanks for all the enlightening posts. They've been a joy to read (if a bit long ;-)