in reply to Re^5: Informal Poll: Traits (Long Post Warning!)
in thread Informal Poll: why aren't you using traits?

BrowserUK

Wow... nice post :)

I think that we have an impedance mismatch on our interpretation of the phrase "dynamic language".

Yes, I agree, although our interpretations are not really that different. My criterion for a "dynamic" language is more the ability to write agile code which can cope with dynamic requirements. This could include eval-ing code at runtime, but it also includes other language features such as polymorphism. For instance, I am currently reading about the Standard ML module system. SML is a rigorously statically typed and compiled language, but its module system is built in such a way that it almost feels like a more dynamic language. This is because of functors, which (if I understand them correctly) are essentially parametric modules whose parameters are specified as module "signatures". When you apply a functor, you pass it a "structure" that conforms to that "signature", and the functor then creates a new "structure" based on it (not all that different from the C++ STL, actually). If you then combine the module system with ML's polymorphism, you can get a very high degree of dynamic behavior while still being statically compiled.
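To make that concrete, here is a very loose Perl analogue of the idea (everything here, from make_set() to the shape of the "signature", is made up for illustration): a function that takes a "structure" providing a compare operation and builds a new "structure" from it.

    use strict;
    use warnings;

    # make_set() plays the role of a functor: it takes a "structure"
    # (here just a hash of operations conforming to an ORD-like
    # "signature") and returns a new "structure" (a tiny ordered set)
    # built on top of it.
    sub make_set {
        my ($ord) = @_;       # $ord must provide a compare() operation
        my @members;
        return {
            insert => sub {
                my ($x) = @_;
                return if grep { $ord->{compare}->($_, $x) == 0 } @members;
                @members = sort { $ord->{compare}->($a, $b) } (@members, $x);
            },
            to_list => sub { @members },
        };
    }

    # "Applying the functor" to a structure matching the signature:
    my $int_ord = { compare => sub { $_[0] <=> $_[1] } };
    my $int_set = make_set($int_ord);
    $int_set->{insert}->($_) for 3, 1, 3, 2;
    print join(', ', $int_set->{to_list}->()), "\n";    # prints: 1, 2, 3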

My point is that static code analysis does not have to limit the dynamism of a language.

I also want to quickly say that runtime introspection (read-only or read-write) is not (IMO) a criterion for dynamic languages. In fact, in some languages, like SML, I think runtime introspection is just not needed. That said, I personally like runtime introspection in my OO :)
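Just so we are talking about the same thing, here is the kind of read-only introspection I mean, Perl style (the Point package is a throwaway example): peeking into a package's symbol table to see what methods it defines.

    use strict;
    use warnings;

    package Point;
    sub new { my ($class, $x, $y) = @_; bless { x => $x, y => $y }, $class }
    sub x   { $_[0]{x} }
    sub y   { $_[0]{y} }

    package main;
    # Walk Point's symbol table and keep only the entries with a
    # code slot, i.e. the subs defined in that package.
    no strict 'refs';
    my @methods = sort grep { defined &{"Point::$_"} } keys %{'Point::'};
    print "Point defines: @methods\n";    # Point defines: new x y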

If LISP is so dynamic, and yet also so fast, I would like to understand how it achieves that.

I won't claim to be an expert on LISP compilation, because I truly have no idea about this. I do know that the only language still in use today that is as old as LISP is FORTRAN. Both of these languages have blazingly fast compilers available, probably for the simple reason that 40+ years of improvement have gone into them.

As for how LISP is so dynamic, I think LISP macros have a lot to do with that. LISP has virtually no syntax (aside from all the parens), so when you write LISP code, you are essentially writing an AST (abstract syntax tree) directly. LISP macros are basically functions, executed at compile time, which take a partial AST as a parameter and return another AST as a result. This goes far beyond the power of text-substitution-based macros. And of course, once all these macros are expanded at compile time, there are no runtime penalties.
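Here is a toy sketch of that idea in Perl, with nested array refs standing in for the AST (the 'unless' macro and 'nop' form are invented for the example). The point is just that a macro is a plain function you run over the tree before compiling it, so the rewrite costs nothing at runtime.

    use strict;
    use warnings;
    use Data::Dumper;

    # A "macro" maps one partial AST to another.
    my %macros = (
        'unless' => sub {
            my ($cond, $then, $else) = @_;
            # rewrite (unless C T E) into (if C E T)
            return ['if', $cond, (defined $else ? $else : ['nop']), $then];
        },
    );

    # Walk the tree, expanding any macro forms we find.
    sub expand {
        my ($ast) = @_;
        return $ast unless ref $ast eq 'ARRAY';
        my ($op, @args) = @$ast;
        if (my $macro = $macros{$op}) {
            return expand($macro->(@args));    # expand the result too
        }
        return [$op, map { expand($_) } @args];
    }

    print Dumper expand(['unless', ['zero?', 'x'], ['print', 'x']]);
    # yields: ['if', ['zero?', 'x'], ['nop'], ['print', 'x']]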

To be totally honest, I have written very little LISP/Scheme in my life. Most of my knowledge comes from "reading" it rather than "writing" it. But with languages like LISP, I think more of the (real-world applicable) value actually comes from "grokking" the language, not "using" it. In other words, it is much easier to find work writing Perl than writing LISP, but knowing LISP can make me a better Perl programmer.

With respect to my use of the term 'vtable'.

<snip a bunch of things related to static method lookup vs. dynamic method lookup>

Much of what you say is true, but I think it has more to do with the design and implementation of the languages, and less to do with the underlying concepts.

I believe that static analysis can go a long way, that caching and memoization can take it even further, and that whatever's left is probably so minimal I don't need to worry about it. The best results can be achieved by combining all the best practices into one.
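As a tiny sketch of the caching half of that (cached_dispatch() and its cache are made up for illustration, and a real version would also need to invalidate the cache whenever the class hierarchy changes):

    use strict;
    use warnings;

    # Resolve each (class, method) pair once via can(), then reuse
    # the cached code ref on every later call. Assumes $obj is a
    # blessed reference.
    my %method_cache;

    sub cached_dispatch {
        my ($obj, $method, @args) = @_;
        my $class = ref $obj;
        my $code = $method_cache{"${class}::$method"} ||= $obj->can($method)
            or die "No method '$method' on $class";
        return $obj->$code(@args);
    }

You would call it as cached_dispatch($obj, 'foo', @args) instead of $obj->foo(@args); after the first call, the lookup is a single hash fetch.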

Will this work? I have no idea, but it's fun to try :)

re: program efficiency vs. programmer efficiency

I work for a consultancy which writes intranet applications for other businesses (we are basically sub-contractors). While performance is important (we usually have guidelines we must fall within, and we load-test to make sure), these applications are long-lived (between 2 and 7 years). It is critical to the success of our business, and in turn to the success of our clients' businesses, that these applications are maintainable and extensible. Our end-users may not be anything more than peripherally aware of this, and therefore seem not to care about it. However, those same end-users like hearing "yes" to their enhancement requests too. So while those end-users may not associate this with my use of OO, or the trade-offs I made for readability, or the time I spent writing unit tests, they certainly would "feel" it if I didn't do those things.

My point is that, for some applications and for some businesses, application performance is much lower on the list than things like correctness, flexibility, and extensibility.

Search patterns, breadth first/ depth first etc.

Yeah, I read that part in A12 as well, and I think it is flat-out insanity myself :) Nuff said.

I vaguely remember trying to look up some term that came up within these discussions--something like the "New York Method" or "City Block Method"?--and failing to find an explanation.

The name you are looking for is "Manhattan Distance". I am not that familiar with the algorithm myself; however, I have surely (unknowingly) employed it many a time, since I lived in NYC for a while :) Google can surely provide a better explanation.
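Actually, the idea is simple enough to sketch (a minimal hand-rolled version; in practice you might reach for a CPAN module instead):

    use strict;
    use warnings;

    # Manhattan (city-block) distance: the sum of the absolute
    # differences of the coordinates, i.e. the distance you'd walk
    # on a rectangular street grid rather than the straight-line
    # (Euclidean) distance.
    sub manhattan_distance {
        my ($p, $q) = @_;    # two array refs of equal length
        my $d = 0;
        $d += abs($p->[$_] - $q->[$_]) for 0 .. $#$p;
        return $d;
    }

    print manhattan_distance([1, 2], [4, 6]), "\n";    # 3 + 4 = 7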

And I am unsure yet whether Traits are either the best semantically, or the least likely to degrade performance, of the possible solutions to the problem they address.

I am not 100% sure of this either. I like the sound of Traits/Roles, but you never know when something better might come along.

-stvn

Re^7: Informal Poll: Traits (Long Post Warning!)
by TimToady (Parson) on Nov 20, 2005 at 18:34 UTC
    Yeah, I read that part in A12 as well, I think it is flat out insanity myself :) Nuff said.
    I'd just like to point out that the passage you're quoting has almost nothing to do with standard dispatch. It's just syntactic relief for alternate dispatchers, selected by the caller. That last bit is important. Once you commit to a particular dispatcher, you're stuck with it. If you use the ordinary dispatcher syntax, you get the ordinary dispatcher. There's no extra overhead there. In fact, you probably use the ordinary fast dispatcher to get to the alternate dispatcher, as if it were just an ordinary method call. The rest is just syntax.
      That last bit is important. Once you commit to a particular dispatcher, you're stuck with it.

So if I understand correctly, if I choose for a particular method call to use breadth-first instead of the canonical C3, then it will apply to that particular method call only.

      That is still insane, but not as bad as I originally thought :)

      -stvn
        We are trying to unify MRO generators regardless of the call-one vs call-all semantics of the eventual dispatch. And we know that the desired order for construction and destruction are opposite to each other, so they can't be exactly the same dispatcher, though they might use the same MRO in opposite orders. We're just trying to generalize, and in particular keep the done-vs-try-again distinction orthogonal to MRO determination. And as long as the syntax allows us to plug in a different one without an additional level of indirection, we've made no commitments to which ones run fast and which ones run slow.

        In other words, yes, it's insane. Crazy like a fox, if you will... :-)

Re^7: Informal Poll: Traits (Long Post Warning!)
by hv (Prior) on Nov 21, 2005 at 12:50 UTC

    I do know that the only language still in use today as old as LISP is FORTRAN.

    Just a data point: my brother recently started a new job where he is learning COBOL for the first time.

He's a mainframe programmer of 25-30 years' standing who spent most of his career in the airline bookings industry, but the jobs there have dwindled and mostly moved to the US; his new job is in the banking sector.

    COBOL is probably the highest-level language he's had the opportunity to use in anger.

    Hugo