ionelica has asked for the wisdom of the Perl Monks concerning the following question:

Hello perl monks,

I'm searching and searching for a solution to this problem but I have a feeling that I'm not searching in the right direction.

Context: Windows XP/wamp

In a web page, I'm launching a series of scripts. Before launching the scripts, there is a variable "waiting time", typically 10 hours.

The problem is that even before the end of the waiting time, the web server quits with a timeout: (70007)The timeout specified has expired: ap_content_length_filter: apr_bucket_read()

I tried to do the following:

my $procLaunchAnalyze = Proc::Background->new("c:/strawberry/perl/bin/perl.exe scripts/example.pl $my_seconds");

In example.pl, I have the following sleep:

# wait the desired number of seconds
sleep $my_seconds;

Is there any way I can perform this without encountering the timeout?

Thanks a lot for your help,

I.

Replies are listed 'Best First'.
Re: Executing long perl scripts (>2h) in a browser
by Anonymous Monk on Jun 28, 2011 at 10:43 UTC
      The gist is: once you launch a process in the background, don't wait for it to finish; return content to the browser.
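A minimal sketch of that pattern, using the same Proc::Background call as in the question (the paths, the 10-hour figure, and the CGI response are illustrative assumptions; Proc::Background is a CPAN module):

```perl
use strict;
use warnings;
use Proc::Background;    # CPAN module, assumed installed

my $my_seconds = 36000;  # hypothetical 10-hour wait, as in the question

# Start the long-running script in the background...
my $proc = Proc::Background->new(
    "c:/strawberry/perl/bin/perl.exe scripts/example.pl $my_seconds"
);

# ...and return a page immediately, WITHOUT calling $proc->wait.
print "Content-type: text/html\n\n";
print "<p>Job started (pid ", $proc->pid, "); check back later.</p>\n";
```

The key point is that nothing after the new() blocks on the child, so the request completes well inside the server's timeout while the job keeps running.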
Re: Executing long perl scripts (>2h) in a browser
by jpl (Monk) on Jun 28, 2011 at 17:53 UTC
    This may be more of a web server problem than a perl problem. We encountered something similar when the perl scripts we were running (successfully) on one web server started failing after we moved to a different server. In our case, it turned out that the new server set some resource limits that were perfectly reasonable for ordinary interactive requests, but too restrictive for "batch" jobs we spawned in response to requests submitted to the server. The original requests completed in a timely fashion, but the batch jobs inherited the limits and either timed out or exceeded a file size limit. Our admins were willing to lift the limits, but the existing limits were useful for preventing accidental resource hogging. So what we did instead was to
    1. Modify the server source to make the limits soft instead of hard, so they could be raised on a process-by-process basis, and
    2. Invoke the batch jobs via a "wrapper" that removed the limits before executing the batch jobs.
    This may or may not be what is behind your timeouts, and you may or may not be able to raise the limits if that is the problem. See the manual pages, if any, for getrlimit and setrlimit. In my environment, with very cooperative system administrators and access to the server source (I think they even "bought back" making the limits soft via a configuration parameter), this worked perfectly.

    update It's starting to come back to me now. What we changed was CGIWrap. See http://cgiwrap.unixtools.org/changes.html, New in version 4.0:, option --with-soft-rlimits-only

      We can return this discussion to the perl domain by asking what a perl script can do if it finds itself with restricted, but modifiable, resource limits. The short answer is BSD::Resource from Jarkko Hietaniemi. It's a nice interface to the getrlimit and setrlimit calls (and a few others that cannot be invoked as perl builtins).

      Resource limitations have to be addressed as subroutine calls, not by something invocable with system(). A child process can (try to) modify its own limits, but it cannot modify the limits of its parent.
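A minimal sketch of what such a self-adjustment might look like with BSD::Resource (Unix-only; BSD::Resource is a CPAN module, and raising RLIMIT_CPU here is just an example of the general idea):

```perl
use strict;
use warnings;
use BSD::Resource qw(getrlimit setrlimit RLIMIT_CPU);

# Inspect this process's current CPU-time limit.
my ($soft, $hard) = getrlimit(RLIMIT_CPU);
print "CPU limit: soft=$soft hard=$hard\n";

# An unprivileged process may raise its soft limit,
# but only up to the hard limit it inherited.
setrlimit(RLIMIT_CPU, $hard, $hard)
    or warn "could not raise CPU limit: $!";
```

As noted above, this only works if the inherited limits are soft; a hard limit cannot be raised by the child, which is why the CGIWrap change mattered.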

        Thanks a lot for all your suggestions!

        I checked the indicated resources & the "design" suggestions and came up with the following solution: the web page will be used just to launch my script, using Win32::Process with the DETACHED_PROCESS option.
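For reference, a sketch of that approach (Windows-only; the paths and command line are illustrative assumptions):

```perl
use strict;
use warnings;
use Win32::Process;      # Windows only

my $my_seconds = 36000;  # hypothetical wait passed through to the worker

# Create the worker detached from the web server's console/session,
# so the CGI request can finish while the worker keeps running.
my $proc;
Win32::Process::Create(
    $proc,
    'c:\\strawberry\\perl\\bin\\perl.exe',
    "perl scripts/example.pl $my_seconds",
    0,                   # do not inherit handles from the server
    DETACHED_PROCESS,
    '.',                 # working directory
) or die "Create failed: $^E";

print "Content-type: text/plain\n\nStarted pid ", $proc->GetProcessID, "\n";
```

Not inheriting handles matters here: an inherited stdout/stderr handle can keep the server waiting on the child even after the page is sent.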

        Thanks again, i.

Re: Executing long perl scripts (>2h) in a browser
by locked_user sundialsvc4 (Abbot) on Jun 28, 2011 at 14:04 UTC

    Obtain a suitable batch-processing engine for your environment, and arrange for the web-page to “submit a job.”   Provide, in the web-site (or in web software already supplied with the batch package), the means for users to monitor the jobs, to cancel them (maybe), and to retrieve their output.

    httpd is no place to do heavy-lifting processing.   That’s never been its job.   A web-page is a user interface.

    Do not expect that you're going to have to "roll your own" batch processing system.   There are many dozens available for Windows, and even more for Unix/Linux.   If you find yourself grabbing for a piece of paper and doodling some big system design, stop.