rivandemo has asked for the wisdom of the Perl Monks concerning the following question:

In my projects, we develop real-time software that communicates with hundreds of remote systems and handles the events from, or commands to, them (on average 35,000 events/day). Each type of remote system communicates through one or more dedicated programs in our system. We intend to redesign those programs so as to create them from a generic skeleton (C language) into which system-type-specific handling routines would then be plugged. Prior to integrating them, we would like to develop and test those routines as scripts (e.g. Perl). Now comes my question: once a Perl script is ready for production, how do I get it into the executable? Can I compile the Perl scripts into object modules (*.o) and link them with the other *.o files, or should I first "cross-compile" them from Perl to C? Thank you

Replies are listed 'Best First'.
Re: Re-using Perl script into compiled programs
by mugwumpjism (Hermit) on Feb 14, 2003 at 12:44 UTC

    On my system, running perl -le 'print "Hello, World\n"' takes approximately 3ms, whereas the C equivalent takes 1-2ms. Do you really need that last bit of speed? If you're not using any bloated modules, startup of the Perl interpreter itself can be quite fast, and even compiling a small (say, under 1000-line) program on a 386 or sun4 machine takes less than a second.

    Assuming a lot about your usage, practical experience says that you can pretty much keep `action scripts' or `agents' for remote management functions in Perl. This is the case with Tivoli, for instance, which installs Perl onto every endpoint so that action scripts written in Perl are guaranteed to work. In fact, the worst-performing agents in Tivoli were written in C; I had to rewrite one or two of them in Perl, where the C program's mode of operation was so brain-dead that the interpreted solution with a better algorithm used less than 10% of the CPU time of the original.

    Another option that keeps a production Perl codebase is a persistent Perl daemon that is notified of events, with lower latency, by an external script or program - via a real UNIX signal, a write to a named pipe, or perhaps a C program that collects the information in a minimalistic fashion and hands it to the running Perl daemon to process (starting that daemon if necessary, of course).

    When it comes to performance, algorithms are key. Saying `C' is categorically faster than `Perl' shows a basic ignorance of the complete picture. When it comes to maintainability, code simplicity is the key. Therefore, keeping your core program logic in Perl gives you ultimate maintainability, and the flexibility to move to high performance algorithms without hundreds of hours of C coding. Since Perl runs in so many places, you then don't have to worry about maintaining ports of your program into different languages. Remember, the bare minimum you need on the client system to run a standard Perl script that uses no modules at all is the perl binary.

    Otherwise, you can try embedding the Perl interpreter in your program, building a shared libperl.so that you link with (of course those two options are almost the same), or even getting your program to dump core on itself, turning the resultant core file into an executable with `undump'. My understanding is that the Perl to C compiler is still very experimental.
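    For the embedding route, the canonical starting point is the skeleton from the perlembed documentation; it requires the Perl development headers, and the compile and link flags come from ExtUtils::Embed rather than being hard-coded:

```c
/* Minimal embedded Perl interpreter, following the perlembed docs.
 * Build (roughly):
 *   cc -o interp interp.c `perl -MExtUtils::Embed -e ccopts -e ldopts`
 */
#include <EXTERN.h>
#include <perl.h>

static PerlInterpreter *my_perl;

int main(int argc, char **argv, char **env)
{
    /* Run a fixed one-liner here; a real program would instead pass the
     * path of the production Perl script as the argument after "". */
    char *embedding[] = { "", "-e", "print qq(hello from embedded perl\\n);", NULL };

    PERL_SYS_INIT3(&argc, &argv, &env);
    my_perl = perl_alloc();
    perl_construct(my_perl);
    perl_parse(my_perl, NULL, 3, embedding, NULL);
    perl_run(my_perl);
    perl_destruct(my_perl);
    perl_free(my_perl);
    PERL_SYS_END;
    return 0;
}
```

    The same embedded interpreter can also be reused across events (call_pv and friends), which is how you avoid paying perl_parse on every message.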

Re: Re-using Perl script into compiled programs
by steves (Curate) on Feb 14, 2003 at 10:47 UTC

    The perlembed docs are the place to start.

Re: Re-using Perl script into compiled programs
by Corion (Patriarch) on Feb 14, 2003 at 12:39 UTC

    If you can control the environment well enough, the Inline and Inline::C suite might be interesting to you - it allows you to easily call C code from within Perl, so you can write your tests in Perl and use them to test C code. It also allows you to call Perl code from C, so you can implement parts in Perl first. Inline also keeps the compiled C code around, so the second invocation is fast.

    From what I know, it's also possible to redistribute the cached compiled code from a first invocation, so you could even distribute a binary-only version.

    There is no real Perl-to-C compiler, so you will have to reimplement your Perl code in C if you really need that. But the tests you wrote for your Perl code can still work when you move over to C, if you use Inline::C and keep the API you specified from Perl.

    perl -MHTTP::Daemon -MHTTP::Response -MLWP::Simple -e ' ; # The
    $d = new HTTP::Daemon and fork and getprint $d->url and exit;#spider
    ($c = $d->accept())->get_request(); $c->send_response( new #in the
    HTTP::Response(200,$_,$_,qq(Just another Perl hacker\n))); ' # web

Re: Re-using Perl script into compiled programs
by Abigail-II (Bishop) on Feb 14, 2003 at 12:45 UTC

    I assume you are using C for the speed advantage. And while there are ways to turn Perl code into C code, you are not gaining any speed advantage that way.

    The usual port from Perl to C goes via a text editor.

    Abigail