rsiedl has asked for the wisdom of the Perl Monks concerning the following question:

Hi monks,

I have a question about how to fork a process into the background from a CGI script.
I thought the following code would do what I wanted, but the page keeps loading for as long as the child is running...
Can anybody shed some light on what I'm doing wrong?
#!/usr/bin/perl
use strict;
use warnings;
use CGI qw(:all);
use Proc::Fork;

parent {
    print header;
    my $child_pid = shift;
    print "child [$child_pid] is sleeping.";
}
child {
    sleep 20;
};
Cheers,
Reagen

Replies are listed 'Best First'.
Re: Forking via CGI
by Anonymous Monk on Apr 18, 2006 at 06:53 UTC
Re: Forking via CGI
by zentara (Cardinal) on Apr 18, 2006 at 12:30 UTC
    forking cgi.

    And

    #!/usr/bin/perl
    use warnings;
    use strict;

    # Benjamin Goldberg
    # The Apache process does NOT wait for the CGI process to exit, it waits
    # for the pipes to be closed. (Note that if the CGI is an NPH script, then
    # it does waitpid() for the script to exit.)
    # Thus, you do not really need a child process -- you can have your CGI
    # process transmute itself into whatever you would have the child process
    # do, by doing something like:

    $| = 1;    # need either this or to explicitly flush stdout, etc.

    print "Content-type: text/plain\n\n";
    print "Going to start the fork now\n";

    open( STDIN,  "</dev/null" );
    open( STDOUT, ">>/dev/null" );
    open( STDERR, ">>/path/to/logfile" );

    fork and exit;
    exec "program that would be the child process";

    # Apache waits for all children to close the pipes to apache. It
    # does not wait for them to actually exit -- think ... how in the world
    # could it *possibly* know that those processes were really spawned from
    # the CGI process? Answer: It can't. It can only know that *something*
    # still has an open filedescriptor which points to one of the pipes that
    # apache created to capture the CGI's stdout and stderr. As long as one
    # of these pipes is open, then apache waits.
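
    Putting that together with the original Proc::Fork version: the page hangs because the sleeping child still holds Apache's output pipes. A minimal, untested sketch of the same idea using a plain fork, where the child reopens its standard handles before doing the slow work (the log path is only a placeholder, not anything from the original posts), might look like this:

    #!/usr/bin/perl
    # Sketch only: the child detaches from the handles Apache is watching,
    # so the page finishes as soon as the parent has printed.
    use strict;
    use warnings;
    use CGI qw(:all);

    $| = 1;
    print header('text/plain');

    my $pid = fork;
    die "fork failed: $!" unless defined $pid;

    if ($pid) {
        # Parent: answer the browser and exit normally.
        print "child [$pid] is sleeping.\n";
    }
    else {
        # Child: reopen STDIN/STDOUT/STDERR so Apache's pipes close,
        # then do the long-running work from the question.
        open STDIN,  '<',  '/dev/null' or die $!;
        open STDOUT, '>>', '/dev/null' or die $!;
        open STDERR, '>>', '/tmp/cgi-child.log' or die $!;
        sleep 20;
        exit 0;
    }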

    I'm not really a human, but I play one on earth. flash japh
Re: Forking via CGI
by jonadab (Parson) on Apr 18, 2006 at 12:03 UTC

    Forking in a CGI script can be problematic, because the forked child typically inherits the handles the web server uses to capture the script's output, and the server will not treat the response as finished until those are closed -- so the page keeps loading until the child exits. I believe there are ways to work around this (by detaching the two processes from one another, or by somehow telling the web server to go ahead and send what has been output so far), but I've never done it successfully. Generally there is another way to solve the problem: use some kind of inter-process communication (e.g., sockets or signals) to have the CGI script request the work from an always-running server process and send the client a "view results" link, or send Javascript to the client that makes asynchronous requests back to the server. A rough sketch of the first approach follows.
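
    As an illustration of that first alternative, here is a hypothetical sketch: the spool directory, the results.cgi script, and the worker daemon that would watch the directory are all made up for the example.

    #!/usr/bin/perl
    # Hypothetical sketch: the CGI only records a job request and returns a
    # link; a separate, always-running worker (not shown) watches the spool
    # directory and does the slow part.
    use strict;
    use warnings;
    use CGI qw(:standard);
    use File::Temp qw(tempfile);

    my $spool = '/var/spool/myapp/jobs';    # made-up spool directory

    # Record the request where the worker daemon will find it.
    my ( $fh, $jobfile ) = tempfile( 'jobXXXXXX', DIR => $spool );
    print {$fh} "task=long_report\n";
    close $fh or die "close: $!";

    ( my $job_id = $jobfile ) =~ s{.*/}{};

    # Respond immediately with a link the user can poll for results.
    print header('text/html'),
          start_html('Job submitted'),
          p("Your request has been queued as job $job_id."),
          p( a( { href => "results.cgi?job=$job_id" }, 'View results' ) ),
          end_html;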


    Sanity? Oh, yeah, I've got all kinds of sanity. In fact, I've developed whole new kinds of sanity. Why, I've got so much sanity it's driving me crazy.