clintonm9 has asked for the wisdom of the Perl Monks concerning the following question:

I have written a script that uses DBI to talk to a MySQL database. One requirement I have is to allow an SQL statement to run for more than a few hours from the web browser. To accomplish this I fork a child that emits a warn every 90 seconds; since the Apache timeout is set to 120 seconds, the script never times out. See the example below:
    my $pid;
    $SIG{CHLD} = 'IGNORE';
    unless ($pid = fork) {
        while (1) {
            sleep 90;
            warn "Output taking longer than 90 seconds, waiting";
        }
        exit;
    }
    # ..... Do DBI stuff
    kill(9, $pid);
The problem I am having is that if the user stops the connection, the process keeps running; it won't stop until the SQL statement completes. The bigger problem is that when all the tables are locked, these processes keep accumulating until there are too many for the server to handle. So my question is: is there a way to stop these scripts when the user disconnects? I would think Apache would see that the socket is broken and could kill the Perl script. Or is there a better method to accomplish the same thing? Thanks in advance!
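One direction worth trying, sketched below: under mod_cgi, Apache normally sends the CGI process a SIGTERM when the client goes away, and a write to the closed socket raises SIGPIPE. Installing handlers for both lets the parent reap its keepalive child and exit instead of running to completion. This is a minimal sketch of that idea, not a drop-in fix; the "do DBI stuff" part is elided as in the original, and whether the blocked DBI call itself gets interrupted depends on the driver.

```perl
use strict;
use warnings;

my $child = fork();
die "fork failed: $!" unless defined $child;

if ($child == 0) {
    # Child: keepalive chatter so Apache's 120-second timeout never fires.
    while (1) {
        sleep 90;
        warn "Output taking longer than 90 seconds, waiting\n";
    }
    exit 0;
}

# Parent: if the browser disconnects, clean up the child and bail out.
for my $sig (qw(TERM PIPE)) {
    $SIG{$sig} = sub {
        kill 'TERM', $child;
        waitpid $child, 0;
        exit 1;
    };
}

# ... do DBI stuff here ...

kill 'TERM', $child;   # TERM is politer than 9 (KILL)
waitpid $child, 0;     # reap the child instead of $SIG{CHLD} = 'IGNORE'
```

Reaping with waitpid (rather than `$SIG{CHLD} = 'IGNORE'`) also avoids leaving zombies if the handler fires mid-query.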

Replies are listed 'Best First'.
Re: Perl and Apache… Process keeps running.
by Anonymous Monk on Jan 14, 2010 at 04:31 UTC
      I think this will still have the same problem I mentioned: if you leave the page, it will still keep running. From the column: Also, the child process has no awareness if the parent is finally disinterested, and continues merrily chugging away to produce a result that no-one will see. Perhaps that can be fixed in another revision of the program. But until next time, enjoy!
        Um, the technique shows you how to separate the actions of the parent from those of the child without hanging the webserver. This is the most important part. Adding a kill button is a SMOP (simple matter of programming), five lines at most:
        • store child pid
        • detect kill request
        • get pid
        • kill pid
        • cleanup or whatever
Re: Perl and Apache… Process keeps running.
by scorpio17 (Canon) on Jan 14, 2010 at 14:32 UTC

    If you have an SQL statement that takes HOURS - then you need to spend some time fixing your database! You should be able to do complicated multi-table joins on tables having millions of rows in only a fraction of a second. Maybe you need a better indexing strategy?

    But if it can't be fixed - then it sounds like you're trying to "web-ify" an app that just isn't well suited for the web. Consider setting up a batch process system instead. The web app will let users submit a job request. Once a job is created, they can go do something else. Later, they can return to check on the status of their job (pending, running, completed, failed, etc.). This way all the web app has to do is write a line of text into a "todo" list. You can set up a cron job to check the list every 15 minutes or so and begin processing any new jobs that it finds. You can add a "kill job" button to the "check status" page, etc.
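The queue here can be as simple as two flat files. A rough sketch, with the file locations and tab-separated job format being assumptions rather than any real convention; the actual DBI work is left as a comment:

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

my $todo = "/tmp/jobs.todo";
my $done = "/tmp/jobs.done";

# Web app side: queueing a job is just appending a line.
sub submit_job {
    my ($user, $sql) = @_;
    my $id = time() . ".$$";
    open my $fh, '>>', $todo or die "open: $!";
    flock $fh, LOCK_EX;                # serialize concurrent submits
    print {$fh} join("\t", $id, $user, $sql), "\n";
    close $fh;
    return $id;
}

# Cron side (run every 15 minutes): drain the queue, record status.
sub process_jobs {
    return unless -s $todo;
    open my $in, '<', $todo or die "open: $!";
    flock $in, LOCK_EX;
    my @jobs = <$in>;
    open my $out, '>>', $done or die "open: $!";
    for my $line (@jobs) {
        chomp $line;
        my ($id, $user, $sql) = split /\t/, $line, 3;
        # ... run $sql via DBI here; record success or failure ...
        print {$out} "$id\tcompleted\n";
    }
    close $out;
    truncate $todo, 0;                 # queue drained
    close $in;
}
```

The "check status" page then just greps the done file for the user's job id, and the kill-button idea from the earlier reply slots in on the cron-worker side.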

    In other words, it sounds like this isn't the kind of job you do in "real time", even if you're sitting at a mysql prompt - so don't try to make it "real time" via a web interface.