http://qs1969.pair.com?node_id=95777

deprecated has asked for the wisdom of the Perl Monks concerning the following question:

I've got another project (aside from the site audit I do) that involves a lengthy and complex process of turning TIFFs into PDF, XML, and a couple of other proprietary formats.

At the moment we're using a small cluster of Ultra 10s with tape magazines, and we're processing about 200 GB of data per week. We run all of this data through, you guessed it, Perl. However, these are not necessarily Perl daemons. We do have daemons running, but generally we have wrappers for each step of the process that wind up creating as many as 50 or 60 Perl processes. The load on these U10s can climb over 20.0, and when it does, the network card dies.

I'm not sure if that's a Sun kernel issue or what, but I was talking to our lead Solaris guy about it today, and I got to thinking about how to reduce the load on the machine, or at least the overhead of having so many perl executables running.

Why not use something like mod_perl?

My thought was: rather than spawn a new 3.5 MB perl executable for every job, why not have a single executable running (or even a couple) listening on a named pipe or a socket, and just feed it snippets of code to run?
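
Something like this is what I have in mind. This is just a rough, untested sketch; the pipe path is made up:

    #!/usr/bin/perl
    # Sketch of the "one persistent perl" idea: a daemon that blocks
    # on a named pipe and evals whatever code a client writes to it.
    use strict;
    use warnings;
    use POSIX qw(mkfifo);

    my $pipe = '/tmp/perl_worker.fifo';    # hypothetical location
    mkfifo($pipe, 0700) unless -p $pipe;

    while (1) {
        # open() blocks here until a client opens the pipe for writing
        open my $fh, '<', $pipe or die "can't open $pipe: $!";
        my $code = do { local $/; <$fh> }; # slurp one snippet
        close $fh;
        next unless defined $code && length $code;
        eval $code;                        # run it in this very process
        warn "snippet died: $@" if $@;
    }

Then a wrapper just does something like

    echo 'print scalar localtime, "\n";' > /tmp/perl_worker.fifo

instead of firing up a whole new interpreter.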

This seems cool. I don't have to worry about security, since there are only a couple of us running scripts on these machines, and we're deep in firewall land.

Recently I was working on a project involving an object that I could add code blocks to for later execution.
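
The object looked roughly like this (names are illustrative, not the real project code):

    package CodeQueue;    # invented name for this example
    use strict;

    sub new     { bless { blocks => [] }, shift }
    sub add     { my ($self, $code) = @_; push @{ $self->{blocks} }, $code }
    sub run_all {
        my $self = shift;
        for my $block (@{ $self->{blocks} }) {
            eval { $block->() };           # coderefs queued for later
            warn "block died: $@" if $@;
        }
    }

    package main;
    my $q = CodeQueue->new;
    $q->add(sub { print "first\n" });
    $q->add(sub { print "second\n" });
    $q->run_all;                           # prints "first" then "second"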

I think a big problem would be creating a new namespace for each snippet while letting it behave as though it were running in *main::.
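
One possible approach I've been toying with (package names invented, and the glob aliasing is just one way to fake it; untested): compile each snippet into its own fresh package, and alias any main:: symbols it needs via glob assignment.

    use strict;

    my $counter = 0;
    sub run_snippet {
        my ($code) = @_;
        my $pkg = 'Sandbox' . ++$counter;  # fresh namespace per snippet
        {
            no strict 'refs';
            # make $shared in the sandbox an alias for $main::shared
            *{"${pkg}::shared"} = \$main::shared;
        }
        my $result = eval "package $pkg; no strict 'vars';\n$code";
        warn "snippet in $pkg died: $@" if $@;
        return $result;
    }

    our $shared = 42;
    run_snippet('print "shared is $shared in ", __PACKAGE__, "\n";');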

Has anyone written something like this? I'm thinking of whipping up a prototype to see how it behaves, but I'd like some input. <!-- but not from you, tilly... -->

thanks
brother dep.

--
Laziness, Impatience, Hubris, and Generosity.