PerlMonks
That is the easy part, which our questioner already mentioned:

    fork or do $script;

or

    unless (fork) { do $script; exit; }

Performance gains here will depend on how well the system implements fork; all modern real operating systems use copy-on-write, so fork itself will be very quick, but each child will execute do FILE independently. This latter step means that perl will still need to compile every script for each request, which is probably our questioner's actual overhead problem.

The best solution is probably to refactor the Perl scripts into modules that can be loaded into the master process, duplicated along with everything else at fork, and then executed quickly in the forked child.

Another possible workaround for the cost of compiling the scripts may be B::Bytecode and ByteLoader, although they do have some limitations. In this case, you would want the master process to have already loaded ByteLoader before forking: use ByteLoader (); will load the module without calling its import method.

In reply to Re^3: Use of do() to run lots of perl scripts
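A minimal self-contained sketch of the fork-then-do pattern described above. The script file here is a throwaway stand-in written at runtime for demonstration; in the real setup it would be one of the questioner's existing scripts. The child runs do $script and then must call exit so it never falls back into the parent's control flow:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Write a tiny stand-in script; in practice this would be an
# existing .pl file on disk (this one is made up for the demo).
my ($fh, $script) = tempfile(SUFFIX => '.pl', UNLINK => 0);
print $fh 'print "child ran\n"; 1;' or die "write: $!";
close $fh or die "close: $!";

my $child_status;
if (my $pid = fork()) {
    waitpid($pid, 0);        # parent: reap the child
    $child_status = $?;      # child's wait status (0 on clean exit)
} elsif (defined $pid) {
    do $script;              # child: compile and run the script
    exit;                    # never fall back into parent code
} else {
    die "fork failed: $!";
}
unlink $script;
```

Note that each child pays the full compile cost of do $script; preloading the code as modules before the fork, as suggested above, avoids that.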
by jcb