You are correct!
I added a close STDOUT and a close STDERR before the exec command and now the webpage displays 'Done' immediately.
Thanks for your quick reply.
Paul McIlfatrick
I was too quick with my reply, basing it on how quickly the browser displayed 'Done', and I had not checked the processed files - it turns out there had been no processing of the files at all.
Adding a close STDOUT and a close STDERR before the exec command prevented the exec command from running, so no files were processed.
Putting a close STDOUT and a close STDERR after the exec command results in the same long delay before the browser displays 'Done'.
Paul McIlfatrick
If the long-running process needs those handles (as it apparently does), try re-opening them to some file (before running the subprocess).
In more detail, the underlying problem is that the implicitly forked subprocess running under nohup gets duplicates of the parent process's file handles (those of your main script) at the time of the fork. All "instances" of those handles - which are connected via pipes to the web server - must be closed (or re-opened to point elsewhere) before the web server considers the CGI job complete.
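By way of illustration, a minimal sketch of that approach might look like the following (the job name process_files.pl, the /dev/null targets, and the shell-style exec "nohup ... &" invocation are placeholders standing in for whatever the original script actually runs):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Send the response first: re-opening STDOUT below implicitly closes
    # it, which flushes this output down the pipe to the web server.
    print "Content-Type: text/plain\n\nDone\n";

    # Re-open (rather than close) the standard handles so the job still
    # has somewhere to write; the forked subprocess then inherits
    # /dev/null instead of duplicates of the server pipes.
    open STDOUT, '>', '/dev/null' or die "Cannot re-open STDOUT: $!";
    open STDERR, '>', '/dev/null' or die "Cannot re-open STDERR: $!";
    open STDIN,  '<', '/dev/null' or die "Cannot re-open STDIN: $!";

    # The trailing '&' makes the shell fork the job into the background
    # under nohup; this Perl process is replaced by the shell, which
    # exits as soon as the job has been detached.
    exec 'nohup ./process_files.pl &' or die "exec failed: $!";

Re-opening STDERR to a real log file instead of /dev/null gives the same result as far as the server is concerned, with the bonus that the job's output stays inspectable.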
-The other Anonymous Monk