I must preface this by stating that I know next to nothing about Perl, computers, compilers, etc. I can probably intimidate a bunch of Luddite horticulturists, but mostly I'm just guessing.
That said, I hear a lot about Perl not having great performance compared to compiled languages like C or C++. I'm wondering why, if you have a Perl program that you like, you couldn't get the interpreter/compiler to render your script as architecture-dependent machine code. Am I missing the point here? Is machine code generated from Perl fundamentally slower than compiled C? Or is the performance hit mainly a function of the overhead of the interpretation/compilation steps? If Perl's poor (relative) performance is just an overhead issue, why couldn't you front-load that overhead? Sure, you'd need architecture- and system-dependent binaries, but as far as I can see, we do that already.
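I have since seen a perlcc tool mentioned that supposedly wraps Perl's compiler back ends to do something like this. I haven't tried it, so treat this as a guess at the usage:

    # Supposedly compiles a Perl script into a standalone executable (untested)
    perlcc -o foo foo.pl

If that works, is the resulting binary actually faster, or does it just bundle the interpreter?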
UPDATE:
I've been unclear. Allow me to try again:
As I understand it, Perl must be interpreted, rendered as bytecode, and executed. The question is this: how good is that bytecode, and how similar is it to, or different from, a compiled binary?
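As an aside, I gather (though I may be misreading what it shows) that you can peek at what the compiler produces with the core B::Concise module:

    # Dump the internal op tree Perl compiles this one-liner into
    perl -MO=Concise -e 'print 1 + 2'

If that op tree is what actually gets executed, my question amounts to whether it could be saved and reloaded.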
I am asking because, if the bytecode generated is good, why can't we distribute the bytecode rather than the source, skipping a run-time step and improving performance along the way?
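For example, I've seen the B::Bytecode back end mentioned, which sounds like roughly what I'm describing. I haven't tested this, so the invocation below is a sketch based on what I've read:

    # Compile foo.pl to bytecode, with a ByteLoader header so it can run directly (untested)
    perl -MO=Bytecode,-H,-ofoo.plc foo.pl

    # Later, run the saved bytecode without re-parsing the source
    perl foo.plc

Is something like that the answer, or does it not actually skip the expensive part?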