It may also be possible to write a very low-overhead C program or shell script that just takes the input and writes it to a queue file that your script can process later.
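Something like this minimal sketch might do it (untested, naturally; the queue file path and the one-argument-per-line record format are assumptions for illustration only):

    /* queue.c - minimal sketch of a low-overhead queue appender.
     * The path /var/tmp/work.queue and the one-argument-per-line
     * format are illustrative assumptions, not a spec.
     * Compile with: cc -O2 -o queue queue.c
     */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        /* "a" opens with O_APPEND, so each run adds to the end */
        FILE *q = fopen("/var/tmp/work.queue", "a");
        if (q == NULL) {
            perror("fopen");
            return 1;
        }
        /* one line per argument; the batch script drains these later */
        for (int i = 1; i < argc; i++)
            fprintf(q, "%s\n", argv[i]);
        fclose(q);
        return 0;
    }

The Perl script could then drain the queue file on whatever schedule suits, paying the interpreter startup cost once per batch instead of once per file.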
I agree though that their change has created more overhead.
That'll get rid of some overhead, but it is likely that the sheer volume of shelling out (a fork/exec per file) is the major source of overhead, not the cost of running the external program itself. Benchmarks would have to be done to be sure, but I doubt such a C program would be fast enough.
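For what it's worth, a rough way to put a number on the per-exec cost on a POSIX system might look like the sketch below (the iteration count and the child program you point it at are assumptions for illustration):

    /* bench.c - rough sketch: time N fork+exec cycles of a child program.
     * ITERATIONS and the child path are illustrative assumptions.
     * Compile with: cc -O2 -o bench bench.c
     * Usage: ./bench /path/to/child [args...]
     */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/wait.h>

    #define ITERATIONS 1000

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s child [args...]\n", argv[0]);
            return 1;
        }

        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);

        for (int i = 0; i < ITERATIONS; i++) {
            pid_t pid = fork();
            if (pid == 0) {
                execv(argv[1], &argv[1]); /* child: run program under test */
                _exit(127);               /* only reached if exec failed */
            }
            waitpid(pid, NULL, 0);        /* parent: serialize the runs */
        }

        clock_gettime(CLOCK_MONOTONIC, &end);
        double secs = (end.tv_sec - start.tv_sec)
                    + (end.tv_nsec - start.tv_nsec) / 1e9;
        printf("%d execs in %.3f s (%.3f ms each)\n",
               ITERATIONS, secs, secs * 1000 / ITERATIONS);
        return 0;
    }

Running that once against the tiny C appender and once against a do-nothing perl invocation would show how much of the cost is fork/exec itself versus interpreter startup.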
---- I wanted to explore how Perl's closures can be manipulated, and ended up creating an object system by accident.
-- Schemer
:(){ :|:&};:
Note: All code is untested, unless otherwise stated
Welp, a 1k compiled C program that takes ARGV and appends to a file seems to be on the order of 1000x faster than launching perl, interpreting a script to process one file, and then rinsing and repeating for that many files. You still have all of the exec overhead from the main Java program, but without loading up a 7 MB Perl instance for every file submitted.
I really do think, though, that refactoring from one large batch exec to a one-off exec per file is by far the main cause of the overhead.