This may be more of a web server problem than a Perl problem. We encountered something similar when the Perl scripts we were running (successfully) on one web server started failing after we moved to a different server. In our case, it turned out that the new server set some resource limits that were perfectly reasonable for ordinary interactive requests, but too restrictive for the "batch" jobs we spawned in response to requests submitted to the server. The original requests completed in a timely fashion, but the batch jobs inherited the limits and either timed out or exceeded a file size limit. Our admins were willing to lift the limits, but the existing limits were useful for preventing accidental resource hogging. So what we did instead was to:
- Modify the server source to make the limits soft instead of hard, so they could be raised on a process-by-process basis, and
- Invoke the batch jobs via a "wrapper" that removed the limits before executing the batch jobs (a sketch of such a wrapper follows this list).
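For illustration, here is a minimal Perl sketch of such a wrapper, assuming the BSD::Resource module from CPAN is installed. The RLIMIT names and the batch-job path are placeholders, not what we actually used:

    #!/usr/bin/perl
    # Wrapper: raise each soft limit back to its hard ceiling, then exec the
    # real batch job.  Assumes the BSD::Resource module from CPAN; the resource
    # names and the batch-job path below are placeholders.
    use strict;
    use warnings;
    use BSD::Resource qw(getrlimit setrlimit RLIMIT_CPU RLIMIT_FSIZE);

    for my $res (RLIMIT_CPU, RLIMIT_FSIZE) {
        my ($soft, $hard) = getrlimit($res);
        # An unprivileged process can only raise its soft limit, and only up to
        # the hard limit -- which is why the server had to set soft limits only.
        setrlimit($res, $hard, $hard)
            or warn "setrlimit($res) failed: $!";
    }

    exec '/path/to/real/batch_job', @ARGV
        or die "exec failed: $!";

The point of the exec is that the raised limits stay in effect for the real job, since it replaces the wrapper process rather than running as a child with fresh limits.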
This may or may not be what is behind your timeouts, and you may or may not be able to raise the limits if that is the problem. See the manual pages, if any, for getrlimit and setrlimit. In my environment, with very cooperative system administrators and access to the server source (I think they even "bought back" making the limits soft via a configuration parameter), this worked perfectly.
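If you want to confirm whether limits are the culprit, you can dump the limits your CGI processes actually inherit. A minimal sketch, again assuming BSD::Resource from CPAN; the output format is just illustrative:

    #!/usr/bin/perl
    # Quick diagnostic: print the soft/hard limits this process inherited.
    # Assumes the BSD::Resource module from CPAN.
    use strict;
    use warnings;
    use BSD::Resource qw(get_rlimits getrlimit RLIM_INFINITY);

    print "Content-type: text/plain\n\n";   # so it can be dropped in as a CGI

    my $limits = get_rlimits();              # limit name => resource constant
    for my $name (sort keys %$limits) {
        my ($soft, $hard) = getrlimit($limits->{$name});
        printf "%-18s soft=%-12s hard=%s\n", $name,
            ($soft == RLIM_INFINITY ? 'unlimited' : $soft),
            ($hard == RLIM_INFINITY ? 'unlimited' : $hard);
    }

Running it both as a plain CGI request and from inside one of your spawned batch jobs should show whether the batch jobs are inheriting tighter limits.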
Update
It's starting to come back to me now. What we changed was CGIWrap. See http://cgiwrap.unixtools.org/changes.html, under "New in version 4.0": the --with-soft-rlimits-only option.