in reply to Ensuring the user doesn't have to wait for a CGI to finish

The general problem you're facing is that of returning a response (in the form of an HTML page) to the user while work continues on the server side.

There are a couple of strategies for this. merlyn points to one of them, which is to have the CGI fork and continue work in the child process while the parent process exits. This works well on Unix systems but very poorly on Win32 (IIS).
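A minimal sketch of that fork-and-detach pattern (Unix only; the run_detached helper and its cleanup are illustrative, not a full daemonizer -- a production version would typically double-fork to avoid zombies):

```perl
use strict;
use warnings;
use POSIX 'setsid';

# Hypothetical helper: run $work in a detached child so the CGI
# can finish its response immediately. Unix only.
sub run_detached {
    my ($work) = @_;
    defined(my $pid = fork) or die "fork failed: $!";
    return if $pid;    # parent: carry on and finish the HTML response
    # child: detach from the web server so it doesn't hold the
    # connection open waiting for our filehandles
    setsid or die "setsid failed: $!";
    close STDIN;
    close STDOUT;
    close STDERR;
    $work->();         # the long-running job
    exit 0;
}
```

In the CGI you would print the headers and the "your request is being processed" page first, then call run_detached with the slow work, then exit.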

Another approach is to write a separate server process to handle work requests. The CGI queues up a request to this worker process, either by dropping a file into a well-known directory or by opening a socket to it and pushing the request along. The CGI then pushes an HTML page back to the browser and exits, while the worker process grinds through the queued requests.

This latter approach works well when you have control of the server. Some ISPs forbid running your own server processes.
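The file-drop variant of that handoff can be sketched like this; the spool directory name and the .req suffix are assumptions, standing in for whatever your worker process watches:

```perl
use strict;
use warnings;
use File::Temp 'tempfile';

# Assumed directory that the worker process polls for new requests.
my $spool = '/var/spool/myqueue';

# Drop one request into the spool directory. Writing to a temp file
# and renaming it means the worker never sees a half-written request,
# since rename() is atomic within a filesystem.
sub queue_request {
    my ($payload) = @_;
    my ($fh, $tmp) = tempfile(DIR => $spool, SUFFIX => '.tmp');
    print $fh $payload;
    close $fh or die "close failed: $!";
    (my $final = $tmp) =~ s/\.tmp$/.req/;
    rename $tmp, $final or die "rename failed: $!";
    return $final;
}
```

On the other side, the worker loops over glob("$spool/*.req"), processes each file, and unlinks it when done.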

  • Comment on Re: Ensuring the user doesn't have to wait for a CGI to finish

Replies are listed 'Best First'.
Re^2: Ensuring the user doesn't have to wait for a CGI to finish
by Aristotle (Chancellor) on Jul 11, 2002 at 21:09 UTC
    On Unix boxen you may be able to use a simple, quick-and-dirty solution:
        open AT, "|at now" or die "Failed piping to 'at': $!";
        print AT "$some $commandline $to_execute\n";
        print AT "$or $maybe $two\n";
        close AT;
    Of course I didn't tell you that. :-)

    Makeshifts last the longest.