clintonm9 has asked for the wisdom of the Perl Monks concerning the following question:

Dear Monks,
I have a front-end Apache server and a back-end Apache mod_perl server. The front end reverse-proxies requests to the mod_perl server. On the front-end server I have Timeout set to 120 (the number of seconds before receives and sends time out).
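For reference, the front-end side of that setup looks roughly like this (the /app path and the backend port 8080 are made up for illustration):

    # front-end httpd.conf (sketch; path and port are hypothetical)
    Timeout 120                                    # receives/sends time out after 120s

    # reverse proxy to the back-end mod_perl server
    ProxyPass        /app http://127.0.0.1:8080/app
    ProxyPassReverse /app http://127.0.0.1:8080/app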

The problem is MySQL queries that take over 120 seconds. I know I could just increase that number, but I would prefer to handle this in the programming.

That makes me think about using fork() and sending some data between the client and server while the query runs, but I have heard about a lot of problems with forking the Apache process. The bigger issue I see is that, behind a proxy, I have to send a certain amount of data before the proxy passes anything through to the client (not sure why).

So here is what I am thinking (very rough, but I think you will get the point):

    $SIG{CHLD} = 'IGNORE';    # auto-reap children so no zombies pile up

    my $child = fork();
    die "fork failed: $!" unless defined $child;

    if ($child) {
        # parent
        # .... DO DB Stuff
        sleep 10;
    }
    else {
        # child: trickle some output so the connection stays busy
        for (1 .. 100) {
            print " ";
        }
        ## Add in clean up stuff here maybe??
        exit;
    }


So would something like this work, provided I made sure ProxyReceiveBufferSize was exceeded? Also, I do not want to send spaces or line breaks, because they make the results look very strange.
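To make that concrete, here is a fuller sketch as a plain CGI-style script (not mod_perl-specific); the DSN, query, and temp file are placeholders, and whether the padding actually gets past the proxy depends on ProxyReceiveBufferSize. Padding with an HTML comment instead of spaces avoids the strange-looking output:

    use strict;
    use warnings;
    use DBI;
    use POSIX ':sys_wait_h';

    $| = 1;    # unbuffered, so the padding goes out as soon as we print it
    print "Content-Type: text/html\n\n";

    my $tmp = "/tmp/longquery.$$";    # scratch file for the child's results
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;

    if ($pid == 0) {
        # child: run the slow query and stash the results
        my $dbh = DBI->connect('dbi:mysql:mydb', 'user', 'pass',
                               { RaiseError => 1 });    # placeholder DSN
        my ($count) = $dbh->selectrow_array(
            'SELECT COUNT(*) FROM big_table');          # stand-in long query
        open my $fh, '>', $tmp or die $!;
        print {$fh} "$count rows\n";
        close $fh;
        exit 0;
    }

    # parent: trickle padding until the child finishes, to hold the
    # proxy/client connection open past the 120-second Timeout
    while (waitpid($pid, WNOHANG) == 0) {
        print '<!-- still working -->' x 32;    # invisible in rendered HTML
        sleep 5;
    }

    open my $fh, '<', $tmp or die $!;
    print while <$fh>;    # now send the real results
    close $fh;
    unlink $tmp;

Even if the padding works, this still occupies an Apache process for the full length of the query.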

I am hoping maybe there is an entirely different way to do this!

Please let me know, thanks!

Replies are listed 'Best First'.
Re: Mod_perl, reverse proxy, long query, timeout
by sundialsvc4 (Abbot) on Apr 29, 2011 at 13:30 UTC

    To me, the combination of mod_perl and “long queries” (or any other computationally or time-intensive work) is a vexing one that is best avoided. I have never appreciated the way that mod_perl effectively thrusts the Perl interpreter into an httpd process instance. A notion that “sort-of works sort-of okay” when the requests being handled are “uniformly clean and mean” starts to have real trouble, IMHO, when the mix of requests becomes, shall we say, more realistic. We need a way to push such requests out to other processes, and we simultaneously need the means to govern them.

    I would start by looking carefully at FastCGI as an alternative to mod_perl. This pushes the request out to a service process that remains at arm’s length from the httpd process pool. (Search for the term at http://search.cpan.org.)
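    A minimal FastCGI service loop, using the FCGI module from CPAN, looks something like this (wiring it up to Apache, via mod_fastcgi or mod_fcgid, is a separate configuration matter):

        use strict;
        use warnings;
        use FCGI;

        # One persistent worker process; Apache hands requests to it over
        # a socket instead of running Perl inside httpd itself.
        my $request = FCGI::Request();

        while ($request->Accept() >= 0) {
            print "Content-Type: text/plain\r\n\r\n";
            print "handled by pid $$\n";    # real work happens here, at arm's length from httpd
        }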

    Then, for lengthy requests, I would suggest a workload-management system built on, say, POE or some other suitable request-queueing mechanism. Construct the web pages so that they can initiate requests and monitor their completion status from afar, but let the work itself be performed under the auspices of the workload manager. That way, if you determine that (say...) your hardware can process up to 15 requests simultaneously while providing acceptable completion times for each, you can “throttle” the system so that, no matter how many requests are in the queue, it never executes more than that number at once. (You can also easily regulate how many requests, or how many requests of a particular type, a single user may admit into the queue at one time.)
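    The queueing idea does not require POE specifically; a minimal sketch of the same shape, assuming a hypothetical `jobs` table (id, status, payload, result) and a placeholder DSN, might look like this (a production worker would claim rows atomically, and you would run N copies of the worker to allow at most N simultaneous long queries):

        use strict;
        use warnings;
        use DBI;

        my $dbh = DBI->connect('dbi:mysql:mydb', 'user', 'pass',
                               { RaiseError => 1 });    # placeholder DSN

        # Web side: admit the work into the queue and hand back a job id
        # that the status page can poll.
        sub enqueue_job {
            my ($payload) = @_;
            $dbh->do(q{INSERT INTO jobs (status, payload) VALUES ('queued', ?)},
                     undef, $payload);
            return $dbh->last_insert_id(undef, undef, 'jobs', 'id');
        }

        # Web side: the browser polls this until status is 'done'.
        sub job_status {
            my ($id) = @_;
            return $dbh->selectrow_hashref(
                q{SELECT status, result FROM jobs WHERE id = ?}, undef, $id);
        }

        # Stand-in for the 120-second-plus query.
        sub run_long_query {
            my ($payload) = @_;
            $dbh->selectrow_array(q{SELECT SLEEP(130)});
            return "finished: $payload";
        }

        # Worker daemon: a serial loop, so one worker process means one
        # job at a time; run N workers to set the throttle at N.
        sub worker_loop {
            while (1) {
                my $job = $dbh->selectrow_hashref(
                    q{SELECT id, payload FROM jobs
                      WHERE status = 'queued' ORDER BY id LIMIT 1});
                if (!$job) { sleep 2; next; }
                $dbh->do(q{UPDATE jobs SET status = 'running' WHERE id = ?},
                         undef, $job->{id});
                my $result = run_long_query($job->{payload});
                $dbh->do(q{UPDATE jobs SET status = 'done', result = ? WHERE id = ?},
                         undef, $result, $job->{id});
            }
        }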

    Now, http:// becomes (one of) the interface(s) for admitting time-consuming work into the processing system, but it (and Apache) cease to be the actual means of doing it. I have had very good success with that little “paradigm shift.”