Re: Perl: friend or foe ?
by MonkPaul (Friar) on Apr 24, 2005 at 13:42 UTC
Hi,
That was my point: should Perl be written so that it is pre-compiled from the very first run? Even if it is 'pre-compiled each time', surely that's a pointless task that wastes resources.
I'm not knocking Perl here (or trying not to), but I'm just trying to understand some of the reasoning behind its implementation. Can the language be changed to accommodate this? Can it be made richer and more powerful than some of the other languages, whilst remaining 'human friendly'?
Keep the points of view coming.
You mean: should Perl somehow secretly cache the startup steps, so that it performs those only once? Sure, but then we run into problems specifying exactly what to cache and where to cache that translation step.
And hence, what happens in other programming languages is that the users must specify exactly when and how to cache the translation from the programming language to the intermediate language. That would make Perl distinctly less "human friendly" for me. I'd hate having to develop a "makefile" for every Perl program I write. Those systems typically also separate the "compilation" environment from the "execution" environment, losing some of the meta information (like the names of all methods in this class), and losing the ability to "compile" while "executing" for some neat tricks.
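To make that last point concrete, here is a tiny sketch of what "compiling while executing" buys you: a string eval compiles fresh code mid-run, and the new class's method names stay visible in the symbol table. (The package name Counter and the sub next_id are invented for the example.)

    use strict;
    use warnings;

    # Compile brand-new code at runtime -- no separate build step.
    my $class = 'Counter';
    eval "package $class; our \$id = 0; sub next_id { return ++\$id } 1"
        or die $@;

    print Counter::next_id(), "\n";   # prints 1
    print Counter::next_id(), "\n";   # prints 2

    # The meta information survives: ask the symbol table which
    # subs now live in Counter.
    no strict 'refs';
    print "$_\n" for grep { defined &{"${class}::$_"} } keys %{"${class}::"};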
No, I like the current system. When I don't need caching of the translation (which is what your "compilation" seeks to do), I can use Perl by simply saying "go". When I've decided that caching helps, I can do that explicitly using the mechanisms I gave earlier.
To force caching on all users is a premature optimization, which is an evil step.
For example, I'm developing a few applications for clients right now in CGI::Prototype. The eventual application will likely be executed in a mod_perl environment, but I'm running it as CGI because I don't want any caching to interfere with my clean-slate testing, especially as I tweak various parts that will eventually be cached. The fact that Perl lets me do this makes my development time shorter, not longer. (And in fact, some parts of CGI::Prototype are possible only because I can blur the lines between "compilation" and "execution", so I have a richer framework to do my work, even if I use those features only indirectly.)
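For anyone who hasn't seen it, the skeleton of such an app is small; this is a minimal sketch along the lines of the CGI::Prototype synopsis (My::App is a placeholder name):

    #!/usr/bin/perl
    use strict;
    use warnings;

    package My::App;
    use base qw(CGI::Prototype);

    # Per-page behavior is added by overriding hooks (dispatch,
    # template selection, and so on); the defaults render a stub page.

    package main;
    My::App->activate;   # same entry point as plain CGI or under mod_perl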
I'm worried that if I mention Python one more time this weekend I might get dragged off by a lynch mob, but here goes anyway... When you import a module in Python, say Foo.py, it writes the compiled byte code to disk in a file called Foo.pyc. The next time you run, it starts to do the same thing, but first checks the time stamps on the .py and .pyc files: it recompiles the .pyc file only if it is stale, and otherwise just reads it in and ignores the .py file. There are no Makefiles involved.
What is wrong with this idiom? I guess one tricky issue is where you put the .pyc files. If you don't have write permission on the directory in which the .py files are stored, and the .pyc files aren't already there, then you have to figure out where to put the .pyc files you create, though I don't think this would be too terrible. You could have a Python cache directory in your home directory, or something like that.
Why wouldn't such an idiom work for Perl?
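The idiom already works in userland Perl for data, at least. Here is a minimal sketch of the same timestamp check, caching an expensively parsed file with Storable; the file names and parse_data() are hypothetical, and the eval around store() shows one answer to the unwritable-directory worry above (caching just quietly turns off):

    use strict;
    use warnings;
    use Storable qw(store retrieve);

    my $source = 'data.txt';        # hypothetical source, like Foo.py
    my $cache  = 'data.txt.cache';  # derived artifact, like Foo.pyc

    my $data;
    if (-e $cache && -M $cache < -M $source) {
        # Cache is newer than the source: trust it and skip the parse.
        $data = retrieve($cache);
    }
    else {
        $data = parse_data($source);    # the expensive step (returns a ref)
        eval { store($data, $cache) };  # best effort: an unwritable
                                        # directory just disables caching
    }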
To force caching on all users is a premature optimization, which is an evil step.
I disagree. First, I disagree that just because perl could cache the compiled bytecode, it must do so for everyone. There could be a command-line argument, or maybe a pragma ("use cachedbytecode;"), which would enable it. Of course, if there is no downside to the caching (it's smart enough to deal with non-writable filesystems in a reasonable, unobtrusive manner, i.e., to disable bytecode caching automatically), then I see no reason not to enable it by default.
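Mechanically, the pragma half of that wish is trivial: a pragma is just a module whose import() flips a switch that the interpreter (or some loader) could consult. A purely hypothetical sketch, since no such module exists:

    # cachedbytecode.pm -- hypothetical; this sketches only the pragma
    # mechanics, not the cache itself.
    package cachedbytecode;
    use strict;
    use warnings;

    our $ENABLED = 0;

    sub import   { $ENABLED = 1 }   # use cachedbytecode;
    sub unimport { $ENABLED = 0 }   # no cachedbytecode;

    1;

A script would then opt in with "use cachedbytecode;" at the top, and opt back out lexically with "no cachedbytecode;", just like warnings or strict.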
Second, I disagree that it's premature optimisation. It's optimisation, yes. But premature? We already know the overhead of compiling code each time. We can compare that to the caching speed. If the cache is slower than recompiling, then we throw away that code, don't commit it to the main trunk, and document it. If the cache is faster than recompiling each time, especially in small programs (where compilation is a larger percentage of the runtime), then it's a proven optimisation.
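And measuring that overhead is cheap. A rough sketch that times the compile-only pass (perl -c) against a full run of some script.pl (a placeholder name):

    use strict;
    use warnings;
    use Time::HiRes qw(gettimeofday tv_interval);

    # Compare the compile-only pass against a full run.
    for my $args (['-c', 'script.pl'], ['script.pl']) {
        my $t0 = [gettimeofday];
        system($^X, @$args) == 0 or die "perl @$args failed";
        printf "perl %-12s %.3fs\n", "@$args", tv_interval($t0);
    }

If the difference between the two numbers dwarfs the cost of reading a cache file back in, the optimisation has earned its keep; if not, it hasn't.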
Finally, I mostly disagree with the OP that this is really even needed. I laugh at my cow-orkers who work in Java. By the time their code has finished recompiling, my code is almost done executing. And we have similar numbers of lines of code to work with. The JVM load time is so slow that they won't even have their code loaded into memory by the time mine is finished running.
My cow-orkers working in C/C++ are somewhere in the middle .. except that they're all writing JNI code, which again relies on that JVM load time :-)