in reply to RE: CGI Forcing client disconnect

Use fork to create a child process to do your meaty stuff.
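Roughly like this — a minimal sketch (the handler name is a placeholder): answer the client right away, close STDOUT so the web server can end the response, and let a forked child do the slow work.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Answer the client immediately so the connection can close.
print "Content-type: text/plain\n\n";
print "OK\n";

# Flush and close STDOUT so the server can finish the response.
close STDOUT;

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # Child: detach from the remaining handles and do the slow work.
    close STDIN;
    close STDERR;
    do_meaty_stuff();   # hypothetical placeholder for the real work
    exit 0;
}

# Parent exits immediately; the child keeps running on its own.
exit 0;

sub do_meaty_stuff { sleep 1 }   # stand-in for the long-running job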

cLive ;-)

Replies are listed 'Best First'.
Re: Re: RE: CGI Forcing client disconnect
by shotgunefx (Parson) on May 15, 2001 at 04:07 UTC
    I am trying to avoid forking if possible due to the large number of transactions involved.

    The other (much larger) systems sometimes get temporarily blocked for a few hours due to internal issues, and then hit us 16 times a second until all requests are satisfied or the system gives up reposting, which can take as long as 8 hours.

    We found this out the hard way when our server started thrashing late one Friday night.

    The systems sent so many requests that the server took slightly longer to close each connection, and the other systems didn't wait long enough to hear the answer. So they posted everything again, each request, every minute, which as you can imagine started a big downward spiral.

    The Posting systems in question are not under our control and won't likely be modified anytime soon.
      OK, so how about you queue the request (append to a text file) and send a '200 OK' header.

      Then set up a cron job that runs every X minutes (5? 10? 30?).

      This is how I'd do it. cron activated script:

      • copies file of actions to temp file (to avoid issues of incoming requests corrupting data)
      • open temp file and action requests
      • delete temp file
      • open original file (using flock, of course :), remove actioned requests and rewrite
      This way, the request data file doesn't get locked for long :)
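      The steps above might look something like this (file paths and the handler are made up for illustration):

```perl
#!/usr/bin/perl
# Sketch of the cron-driven worker: snapshot the queue, action each
# request from the snapshot, then rewrite the live queue under flock
# with the actioned requests removed.
use strict;
use warnings;
use Fcntl qw(:flock);
use File::Copy qw(copy);

my $queue = '/var/spool/myapp/requests.txt';   # hypothetical path
my $temp  = "$queue.work";

# 1. Copy the queue aside so incoming requests can't corrupt this run.
copy($queue, $temp) or die "copy failed: $!";

# 2. Open the temp file and action each request.
my %done;
open my $in, '<', $temp or die "open $temp: $!";
while (my $line = <$in>) {
    chomp $line;
    action_request($line);   # hypothetical handler for one request
    $done{$line} = 1;
}
close $in;

# 3. Delete the temp file.
unlink $temp or warn "unlink $temp: $!";

# 4. Lock the live queue, drop actioned requests, and rewrite it.
open my $fh, '+<', $queue or die "open $queue: $!";
flock($fh, LOCK_EX) or die "flock: $!";
my @pending = grep { !$done{$_} } map { chomp; $_ } <$fh>;
seek $fh, 0, 0;
truncate $fh, 0;
print $fh "$_\n" for @pending;
close $fh;   # closing the handle releases the lock

sub action_request { my ($req) = @_; }   # stand-in
```

      The lock is held only for step 4, so incoming CGI requests appending to the queue block for just an instant.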

      cLive ;-)

        That's what we do for some of the requests, but the problem I'm currently having is that we need to pipe some of these through an encryption program such as GPG before writing to disk, as they contain sensitive information.
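        For what it's worth, the queue append can go through a pipe to gpg so the data is encrypted before it ever touches disk — a rough sketch, with the recipient and output path invented for illustration:

```perl
#!/usr/bin/perl
# Pipe sensitive request data through gpg on its way to the queue
# file, so the data at rest is always encrypted. The recipient key
# and file path below are hypothetical.
use strict;
use warnings;

my $data    = "sensitive request payload\n";
my $outfile = '/var/spool/myapp/queue.gpg';   # hypothetical path

# Open a pipe to gpg using the list form of open (avoids the shell).
open my $gpg, '|-', 'gpg', '--batch', '--yes',
     '--encrypt', '--recipient', 'ops@example.com',
     '--output', $outfile
    or die "can't spawn gpg: $!";
print $gpg $data;
close $gpg or die "gpg failed: exit status $?";
```

        Checking the result of close is important here: that's where a failed gpg run (bad key, full disk) actually reports itself.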

        BTW, Thanks for the help!