shotgunefx has asked for the wisdom of the Perl Monks concerning the following question:

This may be more of an Apache problem than a Perl one, but I'm not sure. I hope it's not inappropriate to post it here.

I have a CGI script running under Perl/Apache/Linux that receives a GET request from another server. The script simply needs to respond with a 200 OK header, or a 500 header if there is an error.

The system calling this script only waits about 10-20 seconds (not changeable). The problem is that I need to do some heavyweight processing. What I want is to simply tell the client 200 OK, "go away now", since the calling server would not recognize any other response or data anyway.

The problem is that the client still hangs around until the script has completed.
I have disabled buffering and tried running the script as an nph script, with no luck. The only way I can force the client to leave is with close(STDOUT); this feels like a big hack to me, and I'm sure it will cause a problem at some point. Forking is not really an option either. I tried duping STDOUT, printing the header, closing STDOUT and then restoring it, as follows, with no luck:
    #!/usr/bin/perl
    # @@@ THIS DOESN'T WORK @@@
    require 5.001;
    use CGI;

    $| = 1;
    my $query = new CGI;

    open(OLDOUT, ">&STDOUT") or die "Couldn't dupe STDOUT!\n";
    print $query->header(-status => '200 OK'), "\n";
    close(STDOUT);
    open(STDOUT, ">&OLDOUT") or die "Couldn't restore STDOUT!\n";
    close(OLDOUT);
    sleep(20);
    print "GO AWAY!";
The result is that the calling server hangs on until the end and gets the "GO AWAY!" message.
    #!/usr/bin/perl
    # @@@ THIS DOES @@@
    require 5.001;
    use CGI;

    $| = 1;
    my $query = new CGI;

    print $query->header(-status => '200 OK'), "\n";
    close(STDOUT);
    sleep(20);
    print "GO AWAY!";
The calling server terminates immediately after the 200 OK.

Has anyone else here had a similar problem, or does anyone have a more elegant fix?

Thanks.

Re: RE: CGI Forcing client disconnect
by cLive ;-) (Prior) on May 15, 2001 at 03:48 UTC
    Use fork to create a child process to do your meaty stuff.
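
    Something like this minimal sketch, assuming Apache CGI and CGI.pm (do_heavy_stuff() is just a placeholder for the real processing):

        #!/usr/bin/perl
        use strict;
        use CGI;

        $| = 1;                  # flush the header before forking
        my $query = new CGI;

        # Tell the caller we're done before the real work starts
        print $query->header(-status => '200 OK');

        defined(my $pid = fork) or die "Can't fork: $!";

        if ($pid) {
            exit 0;              # parent: response sent, let Apache finish up
        }

        # child: let go of the client connection so Apache doesn't wait on us
        close(STDIN);
        close(STDOUT);
        close(STDERR);

        do_heavy_stuff();
        exit 0;

        sub do_heavy_stuff {
            sleep(20);           # stand-in for the heavyweight processing
        }

    Whether Apache really drops the connection straight away can depend on the child closing (or reopening to /dev/null) all three standard handles, so test it against your setup.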

    cLive ;-)

      I am trying to avoid forking if possible due to the large number of transactions involved.

      The other (much larger) systems sometimes get temporarily blocked for a few hours due to internal issues and then hit us 16 times a second until all requests are satisfied or the system gives up reposting, which can take as long as 8 hours.

      We found this out the hard way when our server started thrashing late one Friday night.

      The systems sent so many requests that the server took slightly longer to close the connections, and the other systems didn't wait long enough to hear the answer. So they posted everything again, each request, every minute, which as you can imagine started a big downward spiral.

      The Posting systems in question are not under our control and won't likely be modified anytime soon.
        OK, so how about you queue the request (append it to a text file) and send a '200 OK' header?
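
        A rough sketch of the queueing half, assuming CGI.pm and flock (the queue path is made up, and it doesn't handle tabs or newlines in parameter values):

            #!/usr/bin/perl
            use strict;
            use CGI;
            use Fcntl qw(:flock);

            my $queue = '/var/spool/myqueue/requests';   # made-up path
            my $query = new CGI;

            # Append the request to the queue under an exclusive lock,
            # one request per line as tab-separated name=value pairs.
            open(QUEUE, ">>$queue") or die "Can't open $queue: $!";
            flock(QUEUE, LOCK_EX)   or die "Can't lock $queue: $!";
            print QUEUE join("\t", map { "$_=" . $query->param($_) } $query->param), "\n";
            close(QUEUE);                                # releases the lock

            # Answer immediately so the caller goes away
            print $query->header(-status => '200 OK');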

        Then set up a cron job that runs every X minutes (5? 10? 30?).

        This is how I'd do it. The cron-activated script (sketched after the list below):

        • copies the file of actions to a temp file (to avoid incoming requests corrupting data)
        • opens the temp file and actions the requests
        • deletes the temp file
        • opens the original file (using flock, of course :), removes the actioned requests and rewrites it
        This way, the request data file doesn't get locked for long :)
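
        And a rough sketch of the cron-activated half following those steps (same made-up queue path; action_request() is a placeholder):

            #!/usr/bin/perl
            use strict;
            use Fcntl qw(:flock);

            my $queue = '/var/spool/myqueue/requests';   # made-up path
            my $temp  = "$queue.work";

            # 1. Copy the queued requests to a temp file
            open(QUEUE, "<$queue") or exit;              # nothing queued yet
            flock(QUEUE, LOCK_SH)  or die "Can't lock $queue: $!";
            open(TEMP, ">$temp")   or die "Can't write $temp: $!";
            my $actioned = 0;
            while (<QUEUE>) {
                print TEMP $_;
                $actioned++;
            }
            close(TEMP);
            close(QUEUE);                                # releases the lock

            # 2. Open the temp file and action the requests
            open(TEMP, "<$temp") or die "Can't read $temp: $!";
            while (my $line = <TEMP>) {
                chomp $line;
                action_request($line);                   # the heavyweight work
            }
            close(TEMP);

            # 3. Delete the temp file
            unlink($temp);

            # 4. Reopen the original under flock, drop the actioned requests
            #    and rewrite whatever arrived in the meantime
            open(QUEUE, "+<$queue") or die "Can't reopen $queue: $!";
            flock(QUEUE, LOCK_EX)   or die "Can't lock $queue: $!";
            my @lines = <QUEUE>;
            splice(@lines, 0, $actioned);                # first $actioned lines are done
            seek(QUEUE, 0, 0);
            truncate(QUEUE, 0);
            print QUEUE @lines;
            close(QUEUE);

            sub action_request { my ($req) = @_; }       # placeholder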

        cLive ;-)