in reply to Perl / Apache 2 / Alarms

I can get no http output until each of the kicked-off scripts has finished.

The important thing here is to close the standard file handles in the child processes, in particular stdout and stderr. Stdin is not an issue here, as the associated pipe is used for sending data from Apache to the CGI script, and Apache takes care of closing this pipe itself when it is done.

Background:  when Apache runs an external program (CGI script) to generate content, it creates three pipes to that process:

           ---------                 -----------------
          |         |   form data   |                 |
          |         | ------------> | stdin           |
          |         |               |                 |
  ----->  |         |  HTML content |      CGI        |
 browser  | Apache  | <------------ | stdout   script |
  <-----  |         |               |                 |
          |         |   error msgs  |                 |
          |         | <------------ | stderr          |
          |         |               |                 |
           ---------                 -----------------

The first one, which is connected to stdin of the CGI script, is used for sending form data, etc. to the script. The second one, which is connected to stdout of the script, is used for reading any content the script produces. And the last one, connected to the script's stderr, is used to read error messages, if there should be any. (This by default ends up in the webserver's error log, but may also be redirected to the browser.)
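
To make the three handles concrete, here is a minimal sketch of a CGI script that touches all of them (the messages are just for illustration):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # stdin: read the form data Apache sends (POST body, if any)
    my $len  = $ENV{CONTENT_LENGTH} || 0;
    my $form = '';
    read STDIN, $form, $len if $len;

    # stdout: whatever is printed here becomes the HTTP response
    print "Content-type: text/plain\n\n";
    print "received $len bytes of form data\n";

    # stderr: ends up in the webserver's error log by default
    print STDERR "diagnostic message for the error log\n";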

When the CGI script creates further child processes via fork or fork/exec (and system is also fork/exec under the hood), all file handles are duplicated by the fork:

           ---------                 -----------------
          |         |   form data   |                 |
          |         | ------------> |     -----------------
          |         | ------------------>|                 |
  ----->  |         |  HTML content |    |                 |
 browser  | Apache  | <------------ |    |                 |
  <-----  |         | <------------------|      child      |
          |         |   error msgs  |    |     process     |
          |         | <------------ |    |                 |
          |         | <------------------|                 |
           ---------                 ----|                 |
                                          -----------------

and Apache will wait for all of them to be closed before it stops reading from the script's stdout/stderr and considers the dynamic content generation to be finished. (Apache has to wait because it cannot tell when the script has finished generating content; normally this is indicated by the script closing the pipe.)  So, unless the child processes close the handles themselves, this won't happen before the processes terminate.
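
The symptom is easy to reproduce; in the following sketch, sleep just stands in for any long-running background job:

    # anti-pattern: the backgrounded process inherits the duplicated
    # stdout/stderr pipes, so Apache keeps waiting for them to close
    system("sleep 60 &");   # browser sees nothing for ~60 seconds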

In short, with long-running background processes, you have to close stdout/stderr yourself (or redirect them to a file, if they are being used).  As system provides no direct way to manipulate the duplicated file handles, it's usually easiest to fork/exec explicitly, which gives you more control:

    my $pid = fork;
    die "fork failed: $!" unless defined($pid);

    if ( $pid == 0 ) {   # child
        close STDOUT;    # <--- !!
        close STDERR;    # <--- !!
        ...
        exec $program, @args;
        die "exec failed";
    }
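
If the child's output is actually needed, redirecting instead of closing works just as well. A sketch, with /tmp/job.log as a stand-in path:

    if ( $pid == 0 ) {   # child
        open STDOUT, '>',  '/tmp/job.log' or die "redirect stdout: $!";
        open STDERR, '>&', \*STDOUT       or die "redirect stderr: $!";
        exec $program, @args;
        die "exec failed";
    }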

Another thing to consider is that the background processes typically won't terminate automatically when the alarm fires. So if you want them to terminate, just kill them yourself (which is easy, as you have their PIDs).  And in case the background processes do (or may) fork further children, it's usually best to create a separate process group for them (setpgrp), and then kill the entire group by sending a negative signal number to the process group's ID.
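
Put together, that might look like this (signal and timeout values are arbitrary):

    my $pid = fork;
    die "fork failed: $!" unless defined($pid);
    if ( $pid == 0 ) {    # child
        setpgrp(0, 0);    # new process group, with this child as leader
        close STDOUT;
        close STDERR;
        exec $program, @args;
        die "exec failed";
    }
    # parent: when the alarm fires, signal the entire group
    $SIG{ALRM} = sub { kill -15, $pid };  # negative signal => whole group
    alarm 30;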

Also, in case you should be running in a persistent environment (FCGI, mod_perl), don't forget to wait/waitpid for the terminated child processes, or else zombies will accumulate. (For a regular CGI script, this is not an issue, because as soon as the respective parent (the main CGI script) terminates, any of its zombies will be taken care of by the OS.)
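
A non-blocking reap, run wherever convenient (e.g. once per request), could look like this:

    use POSIX ":sys_wait_h";
    # collect any children that have exited, without blocking
    1 while waitpid(-1, WNOHANG) > 0;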

Re^2: Perl / Apache 2 / Alarms
by DanielSpaniel (Scribe) on Dec 29, 2011 at 13:12 UTC
    Thanks very much for that extremely helpful post, Eliya. Much appreciated. I'm not at my machine right now, but will study your comments more closely later on when I am, and make sure I close stdout/stderr. Thanks also for the excellent diagrams!
Re^2: Perl / Apache 2 / Alarms
by DanielSpaniel (Scribe) on Dec 29, 2011 at 15:29 UTC

    Yes, that's excellent. I now have it working perfectly, thanks to your suggestions, and the code snippet provided. It's doing exactly what I need it to!

    Thanks again.