Actually, a while back (1996?), there was a guy working on a Perl compiler specifically designed to work with gcc. This compiler parsed Perl and generated C code, or optionally assembly (via gcc). I even believe that it did not embed a parser in the final product, except for evals (that was the last time I checked, so I may be wrong now). Supposedly, he was working on some heavy optimizations which would make Perl code much more comparable to C code (speed-wise), and it even supports making .so files out of .pms, which is just plain cool. I don't know if there's been progress on it, though. Malcolm Beattie, the author, has also created
B::C and
B::CC, the backends that actually generate the C code (B::CC being the optimizing one). All in all, it's pretty cool, since it results in what I like to call "assembler obfuscation", which would prevent those pesky clients from screwing around with a chmod they imagine is unsafe and damaging a whole bunch of directories (personal experience), or Ovid's boss from writing a
File::Find script which erases the entire development environment. Of course, I expect many arguments as to why perlcc may or may not be useful, but the way I see it, this guy has put lots of effort into a cool program, so it's at least worth a try. In at least one distro (the one I have, <5.6),
perlcc is included by default (or was it thrown in by someone else on my linux cds?). I dunno, but it's not something to overlook, and I hope Mr. Beattie is continuing his work on it. I can't be sure, since the docs are somewhat outdated.
If you're wondering about possible advantages of an interpreter over compiled code, the basic arguments are these:
- Interpreted code is slower than compiled code. While this is generally true, the advantage of using a friendly language like Perl outweighs this argument. Also, using mechanisms like FastCGI or mod_perl can easily cut down on the time, a lot (for CGIs, that is).
- Compiled code is not human-readable. Again, depending on the situation, this may or may not be useful.
Back in the stone age, most code HAD to be compiled simply because computters (sic) were so damn slow. Ridiculous assembler optimizations and CISC tricks were employed to cut code size and increase speed. BASIC only became popular because it was so damn easy, though insanely slow to parse and run. Nowadays, code size doesn't really matter, and speed, while still a concern, is not such a GREAT concern, since it doesn't matter anymore if the user waits an extra tenth of a second for the same operation that took an annoying 20 seconds ten years ago. So, I see that the farther along we go, the more compiled code will be used as a code-hiding mechanism rather than simply a speed improvement. For example, if M$ really did write parts of Windows in Visual Basic (a historically interpreted language), it shows they had very little concern for speed or any other type of optimization.
More info (docs) here.
Embedding Perl in C is insanely simple. Look at perlembed; the result is a compiled program with an embedded Perl interpreter which will happily run any text that you pass to it.
NOTE: Since it is outdated, this information may not be relevant or even correct. Parental discretion is advised. Please correct me if I'm wrong and reply with any new information since it would certainly interest me, too. Thanx.
AgentM Systems nor Nasca Enterprises nor
Bone::Easy nor Macperl is responsible for the
comments made by
AgentM. Remember, you can build any logical system with NOR.