In reply to: Mod_perl, reverse proxy, long query, timeout
To me, mod_perl combined with long queries (or any other computationally or time-intensive work) is a vexing pairing that is best avoided. I have never appreciated the way that mod_perl effectively thrusts the Perl interpreter into an httpd process instance. A design that “sort-of works sort-of okay” when the requests being handled are uniformly clean and mean starts to have real trouble, IMHO, when the mix of requests becomes, shall we say, more realistic. We need a way to push such requests out to other processes, and we simultaneously need the means to govern them.
I would start by looking carefully at FastCGI as an alternative to mod_perl. It pushes the request out to a long-lived service process that remains at arm’s length from the httpd process pool. (Search for “FCGI” at http://search.cpan.org.)
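To make the idea concrete, here is a minimal sketch of such a service process using the CPAN FCGI module. (The response body is illustrative; a real handler would do the application’s actual work.)

```perl
#!/usr/bin/perl
use strict;
use warnings;
use FCGI;    # CPAN module providing the FastCGI protocol

# One long-lived service process; Apache hands requests to it over a
# socket instead of running the Perl code inside an httpd worker.
my $request = FCGI::Request();

# Each accepted request is handled here, at arm's length from httpd.
# (Run outside a FastCGI socket, FCGI falls back to one plain-CGI cycle.)
while ( $request->Accept() >= 0 ) {
    print "Content-Type: text/plain\r\n\r\n";
    print "Handled by FastCGI service process (pid $$)\n";
}
```

Because the process persists across requests, it keeps the compiled-once benefit people reach for mod_perl to get, without tying up an httpd slot for the duration of a slow request.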
Then, for lengthy requests, I would suggest a workload-management system built on, say, POE or some other suitable request-queueing mechanism. Construct the web pages so that they can initiate requests and monitor their completion status from afar, but let the work itself be performed under the auspices of the workload manager. This way, if you determine that (say...) your hardware can process up to 15 requests simultaneously while still providing acceptable completion times for each, you can “throttle” the system so that, no matter how many requests are in the queue, it never attempts to execute more than that number at once. (You can also now easily regulate how many requests, or how many requests of a particular type, a single user may admit into the queue at one time.)
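The throttle itself is a small amount of code. Here is a sketch in plain Perl; the `WorkQueue` name and its methods are illustrative, not from any particular CPAN module, and in a real system POE events (or forked workers) would call `job_finished()` asynchronously as work completes.

```perl
#!/usr/bin/perl
use strict;
use warnings;

package WorkQueue;

sub new {
    my ( $class, %args ) = @_;
    return bless {
        max_running => $args{max_running} || 15,    # the concurrency cap
        running     => 0,
        pending     => [],
    }, $class;
}

# Admit a request into the queue; it waits until the throttle frees a slot.
sub submit {
    my ( $self, $job ) = @_;
    push @{ $self->{pending} }, $job;
    $self->start_pending;
}

# Start as many pending jobs as the cap allows.
sub start_pending {
    my ($self) = @_;
    while ( $self->{running} < $self->{max_running}
        && @{ $self->{pending} } )
    {
        my $job = shift @{ $self->{pending} };
        $self->{running}++;
        $job->();    # in real life: fork a worker or post a POE event
    }
}

# Called when a worker reports completion; promotes a waiting job.
sub job_finished {
    my ($self) = @_;
    $self->{running}--;
    $self->start_pending;
}

package main;

my $q = WorkQueue->new( max_running => 2 );
for my $i ( 1 .. 5 ) {
    $q->submit( sub { print "started job $i\n" } );
}
print "running: $q->{running}, queued: ", scalar @{ $q->{pending} }, "\n";
# → running: 2, queued: 3
$q->job_finished;    # one job completes; a queued one is promoted
print "running: $q->{running}, queued: ", scalar @{ $q->{pending} }, "\n";
# → running: 2, queued: 2
```

Per-user limits fall out naturally: `submit` can consult a count of that user’s queued jobs and refuse (or defer) admission before pushing onto `pending`.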
Now, http:// becomes (one of) the interface(s) for admitting time-consuming work into the processing system, but it (and Apache) cease to be the actual means of doing it. I have had very good success with that little “paradigm shift.”