drinkd has asked for the wisdom of the Perl Monks concerning the following question:

Monks-

I have a perl script that uses cgi.pm to take some input, do some (fairly heavy) calculations and return a few lines of calculated answers.

Everything works fine on my local Apache web servers, but after uploading the script to my ISP to make it available to the net, the process dies for long calcs.

Any time the calc time goes over about 2 minutes, the page just stops loading without any errors and says done. Is there a strategy I can use to find out what kind of limit I am running up against? Perhaps they have some kind of time or CPU usage limit implemented? My script is recursive, so it could easily be "parallelized". Should I try fork()?

Perhaps there is a script available that will send to the web client a "top"-like real time display?

Thanks,

drinkd

Replies are listed 'Best First'.
Re: ISP process limits
by sevensven (Pilgrim) on Oct 29, 2001 at 20:30 UTC

    You could also be hitting the Apache timeout configuration value, since I think ulimit applies to the whole session and cannot be applied to just one process (or class of processes, like the Apache children).

    By default, Apache comes with a 300-second timeout for browser/server communication, as can be seen in an httpd.conf:

    #
    # Timeout: The number of seconds before receives and sends time out.
    #
    Timeout 300
    

    If this is the case, then there's an article written by Randal Schwartz (merlyn here at PM) for Web Techniques (hi Randal, love your articles ;-) that talks about how to have a browser keep asking for incremental results without actually staying connected and hitting the timeout.
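    The general shape of that trick can be sketched as follows. This is only an illustration, not merlyn's actual code: the results file path and the calculation stub are made up. The first request forks the heavy work into the background and returns at once; the browser re-requests the page (via a meta refresh) until the results file appears:

```perl
#!/usr/bin/perl
# Sketch of the fork-and-poll trick. The spool file name and the
# calculation stub are hypothetical, standing in for the real script.
use strict;
use warnings;

my $results = "/tmp/calc-results.txt";   # hypothetical spool file

# Stand-in for drinkd's recursive calculation.
sub do_long_calculation { sleep 1; return "the answer\n" }

my $page;
if (-e $results) {
    # A later request: the background child has finished, serve the answer.
    open my $fh, '<', $results or die "open: $!";
    $page = "Content-type: text/plain\r\n\r\n" . join '', <$fh>;
}
else {
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # Child: detach from the web server and do the heavy work.
        close STDOUT;
        close STDERR;
        my $answer = do_long_calculation();
        open my $fh, '>', "$results.tmp" or die "open: $!";
        print $fh $answer;
        close $fh;
        rename "$results.tmp", $results;   # results appear atomically
        exit 0;
    }
    # Parent: answer immediately; the browser re-asks every 5 seconds.
    $page = "Content-type: text/html\r\n\r\n"
          . '<html><head><meta http-equiv="refresh" content="5"></head>'
          . '<body>Still calculating, please wait...</body></html>';
}
print $page;
```

    Since the child is still a child of an Apache process here, a per-process CPU limit may well be inherited by it too — which is why the hand-off to a fully independent worker (as suggested below in this thread) is the safer bet.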


      Resource limits reported by ulimit can be configured per process and are inherited by child processes. See getrlimit(2) for more information, and BSD::Resource for how to manipulate limits from Perl. On some BSD-based OSen, login.conf(5) may also be of interest.

Re: ISP process limits
by Fletch (Bishop) on Oct 29, 2001 at 19:49 UTC

    Try seeing what ulimit reports. The exact output and syntax vary, but bash on a RH 7.1 box says:

    $ ulimit -a
    core file size (blocks)     1000000
    data seg size (kbytes)      unlimited
    file size (blocks)          unlimited
    max locked memory (kbytes)  unlimited
    max memory size (kbytes)    unlimited
    open files                  1024
    pipe size (512 bytes)       8
    stack size (kbytes)         8192
    cpu time (seconds)          unlimited
    max user processes          10190
    virtual memory (kbytes)     unlimited

    From what you're saying, it wouldn't be surprising if they've got CPU time throttled. If that's the case and the limits are applied to all Apache children, then you'll either have to talk them into upping your limit or have the CGI submit the request to something that isn't running under such limits (i.e. a persistent process, not started from Apache, which does the calculations and stores the results for later retrieval).
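    That hand-off can be sketched like this — directory names and the calculation are hypothetical. The CGI only drops a small job file into a spool directory and returns; a worker that drinkd starts himself from a shell (so it inherits his login limits, not Apache's) polls the directory, does the work, and leaves the answer where a later request can fetch it:

```perl
#!/usr/bin/perl
# Sketch of the spool-directory hand-off. Paths and the "calculation"
# are made-up placeholders; both halves are shown in one file only so
# a single round trip can be demonstrated.
use strict;
use warnings;

my $spool = "/tmp/drinkd-spool";   # hypothetical job directory
mkdir $spool unless -d $spool;

# --- CGI side: queue the request and return immediately ---
sub queue_job {
    my ($id, $input) = @_;
    open my $fh, '>', "$spool/$id.job" or die "open: $!";
    print $fh $input;
    close $fh;
}

# --- worker side: meant to run outside Apache, e.g. from a shell ---
sub run_one_job {
    my ($jobfile) = @_;
    (my $id = $jobfile) =~ s/\.job$//;
    open my $in, '<', "$spool/$jobfile" or die "open: $!";
    my $input = join '', <$in>;
    close $in;
    my $answer = length $input;    # stand-in for the real calculation
    open my $out, '>', "$spool/$id.result" or die "open: $!";
    print $out "$answer\n";
    close $out;
    unlink "$spool/$jobfile";      # job done, remove it from the queue
}

# One round trip: queue a job, then let the "worker" drain the spool.
queue_job("job1", "some input");
opendir my $dh, $spool or die "opendir: $!";
run_one_job($_) for grep { /\.job$/ } readdir $dh;
closedir $dh;
```

    In practice the two halves would be separate scripts, with the worker looping over the spool directory under its own process limits.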

      Thanks, Fletch

      I didn't know about the ulimit command.

      If it is running with a limit on "all Apache children" would that apply to just child processes of that script?

      That is to say, could I have the client submit 10% of the job 10 times to be reported in 10 different frames, and then use Javascript to yank all the results back together after they're posted. This is not something I have seen done, but should be possible, no?

      drinkd

        BTW, if you only have FTP access (no telnet), you can still run arbitrary UNIX commands. See my description here.

        Only slightly off-topic, I hope...

        dmm

        
        You can give a man a fish and feed him for a day ...
        Or, you can teach him to fish and feed him for a lifetime