in reply to Re^5: Informal Poll: Traits (Long Post Warning!)
in thread Informal Poll: why aren't you using traits?
Wow... nice post :)
I think that we have an impedance mismatch on our interpretation of the phrase "dynamic language".
Yes, I agree, although our interpretations are not really that different. My criterion for a "dynamic" language is more the ability to write agile code which can cope with dynamic requirements. This could include eval-ing code at runtime, but it also includes other language features such as polymorphism. For instance, I am currently reading about the Standard ML module system. SML is usually a rigorously statically compiled language, but its module system is built in such a way that it almost feels like a more dynamic language. This is because of Functors, which (if I understand them correctly) are essentially parametric modules whose parameters are specified as module "signatures". When you apply a functor, you pass a "structure" that conforms to that "signature", and the functor then creates a new "structure" based on it (not all that different from C++ STL stuff, actually). If you then combine the module system with ML's polymorphism, you can get a very high degree of dynamic behavior while still being statically compiled.
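To make the functor idea concrete, here is a rough sketch in Python, not real SML, and all the names are mine: a functor is just a function from one "structure" (here, any class providing a `compare` method, which plays the role of the informal "signature") to a brand new structure built on top of it.

```python
# Rough Python analogy for an SML functor (names invented for illustration):
# a function that takes a "structure" (any class providing compare(a, b) -> int,
# our informal "signature") and returns a new structure built on top of it.
def MakeSet(Ord):
    """Functor: given an ordered type, build a sorted-set structure."""
    class SortedSet:
        def __init__(self):
            self.items = []

        def insert(self, x):
            # keep the list sorted using the parameter structure's compare
            for i, y in enumerate(self.items):
                c = Ord.compare(x, y)
                if c == 0:
                    return          # already present, sets hold no duplicates
                if c < 0:
                    self.items.insert(i, x)
                    return
            self.items.append(x)
    return SortedSet

class IntOrd:
    @staticmethod
    def compare(a, b):
        return (a > b) - (a < b)    # -1, 0, or 1, like a comparator

IntSet = MakeSet(IntOrd)            # "apply" the functor: a new structure
s = IntSet()
for n in (3, 1, 2, 1):
    s.insert(n)
print(s.items)                      # -> [1, 2, 3]
```

In real SML the compiler checks statically that the argument structure matches the signature, which is exactly why the module system stays safe while feeling flexible.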
My point is that static code analysis does not have to limit the dynamism of a language.
I also want to quickly say that runtime introspection (read-only or read-write) is not (IMO) a criterion for dynamic languages. In fact, in some languages, like SML, I think runtime introspection is just not needed. That said, I personally like runtime introspection in my OO :)
If LISP is so dynamic, and yet also so fast, I would like to understand how it achieves that.
I won't claim to be an expert on LISP compilation, because I truly have no idea about this. I do know that the only language as old as LISP that is still in use today is FORTRAN. Both of these languages have blazingly fast compilers available, probably for the simple reason that 40+ years of improvement have gone into them.
As for how LISP is so dynamic, I think LISP macros have a lot to do with that. LISP has virtually no syntax (aside from all the parens), so when you write LISP code, you are essentially writing an AST (abstract syntax tree). LISP macros are basically functions, executed at compile time, which take a partial AST as a parameter and return another AST as a result. This goes far beyond the power of text-substitution-based macros. And of course, once all these macros are expanded at compile time, there are no runtime penalties.
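The AST-in, AST-out idea can be sketched in a few lines of Python (this is not LISP, just an analogy using nested lists as s-expressions; the `unless` macro shown is a classic LISP example, but the function names are mine):

```python
# Sketch of LISP-style macro expansion: s-expressions as nested Python
# lists, and a "macro" as an ordinary function from AST to AST, run
# before evaluation -- so the expansion costs nothing at runtime.
def expand(ast):
    """Walk the tree, rewriting (unless TEST BODY) into (if TEST nil BODY)."""
    if isinstance(ast, list) and ast and ast[0] == 'unless':
        _, test, body = ast
        ast = ['if', test, 'nil', body]      # the macro's rewrite rule
    if isinstance(ast, list):
        return [expand(node) for node in ast]  # expand sub-expressions too
    return ast

prog = ['unless', ['=', 'x', 0], ['print', 'x']]
print(expand(prog))
# -> ['if', ['=', 'x', 0], 'nil', ['print', 'x']]
```

The key point is that the macro is a plain function operating on the program's own structure, which is why it is so much more powerful than textual substitution.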
To be totally honest, I have written very little LISP/Scheme in my life. Most of my knowledge comes from "reading" it rather than "writing" it. But with languages like LISP, I think more of the (real-world applicable) value actually comes from the "grokking" of the language, and not the "using" of it. In other words, it is much easier to find work writing Perl than it is writing LISP, but knowing LISP can make me a better Perl programmer.
With respect to my use of the term 'vtable'. <snip a bunch of things related to static method lookup vs. dynamic method lookup>
Much of what you say is true, but I think it has more to do with the design and implementation of the languages, and less to do with the underlying concepts.
I believe that static analysis can go a long way, that caching and memoization can take it even further, and that whatever's left is probably so minimal I don't need to worry about it. The best results can be achieved by combining all the best practices into one.
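As a toy illustration of the caching idea (names invented, and Python already does something like this internally): resolve a method through the class hierarchy once, then serve repeat lookups for the same (class, name) pair straight from a cache.

```python
# Toy sketch of caching dynamic method lookup: walk the class hierarchy
# (the "slow" dynamic part) only on a cache miss.
lookup_cache = {}
misses = 0   # count how often we had to do the slow walk

def find_method(cls, name):
    """Resolve name along cls's MRO, memoizing the result."""
    global misses
    key = (cls, name)
    if key not in lookup_cache:
        misses += 1
        for c in cls.__mro__:              # simulate the expensive search
            if name in c.__dict__:
                lookup_cache[key] = c.__dict__[name]
                break
        else:
            raise AttributeError(name)
    return lookup_cache[key]

class Animal:
    def speak(self):
        return "..."

class Dog(Animal):
    pass

d = Dog()
for _ in range(1000):
    find_method(Dog, 'speak')(d)           # only the first call walks the MRO
print(misses)                              # -> 1
```

The rub, of course, is exactly the dynamism we have been discussing: if a class can be modified at runtime, a cache like this has to be invalidated when that happens, which is where the real engineering effort goes.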
Will this work? I have no idea, but it's fun to try :)
re: program efficiency vs. programmer efficiency
I work for a consultancy which writes intranet applications for other businesses (we are basically sub-contractors). While performance is important (we usually have guidelines we must fall within, and we load test to make sure), these applications are long-lived (between 2 and 7 years). It is critical to the success of our business, and in turn to the success of our clients' businesses, that these applications are maintainable and extensible. Our end-users may not be anything more than peripherally aware of this, and therefore seem not to care about it. However, those same end-users like hearing "yes" to their enhancement requests too. So while those end-users may not associate this with my use of OO, or trade-offs I made for readability, or time I spent writing unit tests, they certainly would "feel" it if I didn't do those things.
My point is that, for some applications, and for some businesses, application performance is much lower on the list than things like correctness, flexibility and extensibility.
Search patterns, breadth first/ depth first etc.
Yeah, I read that part in A12 as well, I think it is flat out insanity myself :) Nuff said.
I vaguely remember trying to look up some term that came up within these discussions--something like the "New York Method" or "City Block Method"?--and failing to find an explanation.
The name you are looking for is "Manhattan Distance". I am not that familiar with the algorithm myself; however, I surely (unknowingly) employed it many a time when I lived in NYC for a while :) Google can surely provide a better explanation.
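The idea itself is tiny; a minimal Python sketch:

```python
# Manhattan (taxicab) distance: the sum of the absolute coordinate
# differences -- the distance you'd walk on a rectangular street grid,
# where you can't cut diagonally through blocks.
def manhattan(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

print(manhattan((0, 0), (3, 4)))   # -> 7 (3 blocks east, 4 blocks north)
```

Compare the straight-line (Euclidean) distance for the same points, which would be 5; the grid forces the longer 7-block walk, hence the name.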
And I am unsure yet whether Traits are either the best semantically, or the least likely to degrade performance, of the possible solutions to the problem they address.
I am not 100% sure of this either, I like the sound of Traits/Roles, but you never know when something better might come along.
Replies are listed 'Best First'.

- Re^7: Informal Poll: Traits (Long Post Warning!) by TimToady (Parson) on Nov 20, 2005 at 18:34 UTC
  - by stvn (Monsignor) on Nov 21, 2005 at 01:11 UTC
    - by TimToady (Parson) on Nov 21, 2005 at 02:27 UTC
- Re^7: Informal Poll: Traits (Long Post Warning!) by hv (Prior) on Nov 21, 2005 at 12:50 UTC