Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

I am using the code below to redirect a user to a webpage before the script has finished executing. The sample works fine.
    use strict;
    use CGI;

    defined(my $pid = fork) or die "Can't fork: $!";
    if ($pid) {
        # Parent: wait for the child to finish.
        print "waiting";
        waitpid($pid, 0);
        print "done waiting";
    }
    else {
        # Child: send the redirect to the browser.
        my $result = CGI->new;
        print $result->redirect("redirection");
    }
However, as you can see, I don't close STDIN or STDOUT in the child process, even though you're supposed to, using code like the following:
    open STDIN,  "<", "/dev/null" or die "Can't reopen STDIN: $!";
    open STDOUT, ">", "/dev/null" or die "Can't reopen STDOUT: $!";
How big a problem can this cause? Omitting those two lines seems to be the only way to get output from the child process through to the browser.

Someone also suggested that I print the redirection in the parent process, then fork, exit the parent, and let the rest of the script proceed in the child process. However, under this method, will I still be able to reap the zombied child processes? Normally when you fork, you wait in the parent for the child to finish and then continue; this way I'm killing the parent before the child finishes. Thanks.
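
A minimal sketch of that suggested approach, assuming a plain CGI environment (the URL and do_background_work are placeholders, not part of the original post). Note that once the parent exits, the orphaned child is reparented to init, which reaps it when it finishes, so no zombie is left behind:

    use strict;
    use CGI;

    # Send the redirect from the parent, before forking.
    my $q = CGI->new;
    print $q->redirect("http://example.com/next");   # placeholder URL

    defined(my $pid = fork) or die "Can't fork: $!";
    exit 0 if $pid;    # parent exits; init will reap the orphaned child

    # Child: detach from the server's filehandles, then keep working.
    open STDIN,  "<", "/dev/null" or die "Can't reopen STDIN: $!";
    open STDOUT, ">", "/dev/null" or die "Can't reopen STDOUT: $!";

    do_background_work();    # placeholder for the rest of the script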

Replies are listed 'Best First'.
Re: forking efficiency
by setantae (Scribe) on Jan 14, 2001 at 04:41 UTC
    It's possible that I'm missing something here, but I can't see why you would need to fork at all.
    After you've printed your redirect, closing STDOUT should let the web server consider the output for the client finished and return the result to the browser; after that your script can carry on and do whatever needs doing in the background.
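
    A minimal sketch of that no-fork approach, again assuming a plain CGI setup (the URL and do_background_work are placeholders; depending on the server, STDERR may need closing too):

        use strict;
        use CGI;

        my $q = CGI->new;
        print $q->redirect("http://example.com/next");   # placeholder URL

        # Closing STDOUT signals the web server that the response is done,
        # so the browser gets the redirect immediately.
        close STDOUT or die "Can't close STDOUT: $!";

        do_background_work();    # placeholder for the remaining work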

    At least, this works on my machine here.

    setantae@eidosnet.co.uk|setantae|www.setantae.uklinux.net

      Yeah, I had tried doing that earlier and it hadn't worked. However, I just tried it again and, lo and behold, it worked. Now I don't have to deal with forking and subprocesses.

      Thanks!
Re: forking efficiency
by merlyn (Sage) on Jan 14, 2001 at 08:46 UTC