in reply to Vexing Race Condition

Completely aside from your question but very related to your problem: I solve a similar task with my module Log::Progress. I run the long-running task as a pair of processes: the parent sets up the environment and forks the main worker process, then monitors the log file that the child's STDOUT and STDERR are redirected to. The long-running script just keeps writing to STDOUT, without ever locking or touching file names. The parent uses Log::Progress to incrementally parse the output file, building a data structure that describes the overall progress and the progress of each sub-task. When the child exits for any reason, the parent gets to capture the exit code, and can even tell whether the child aborted with a low-level error like running out of memory.

Meanwhile, the parent writes its updates to a database record, and the web workers read that record to report progress to clients. If you are using Postgres, you can even put a trigger on that table that fires a NOTIFY, with a listener relaying those notifications to the clients over a websocket :-)
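
In rough outline, the pair of processes looks something like this. It's a bare-bones sketch: the log path and the two stand-in subs are made up, and the Log::Progress calls are just the basics from the module synopsis, so see the CPAN docs for the full API:

    use strict;
    use warnings;
    use POSIX ':sys_wait_h';
    use Log::Progress;
    use Log::Progress::Parser;

    my $logfile = '/tmp/job.log';                      # stand-in path
    open my $touch, '>', $logfile or die "open: $!";   # create it before forking
    close $touch;

    defined( my $pid = fork ) or die "fork: $!";
    if (!$pid) {
        # Child: point STDOUT/STDERR at the log and never think about it again
        open STDOUT, '>>', $logfile or die "open: $!";
        open STDERR, '>&', \*STDOUT or die "dup: $!";
        my $p   = Log::Progress->new;                  # emits progress lines
        my $max = 20;
        for my $i (1 .. $max) {
            do_unit_of_work();                         # stand-in for the real job
            $p->at($i, $max);
        }
        exit 0;
    }

    # Parent: tail the log, re-parsing incrementally until the child exits
    open my $fh, '<', $logfile or die "open: $!";
    my $parser = Log::Progress::Parser->new(input => $fh);
    my $done   = 0;
    until ($done) {
        $done = waitpid($pid, WNOHANG) > 0;
        $parser->parse;                                # picks up where it left off
        update_db_record($parser->state);
        sleep 1 unless $done;
    }
    # $? now holds the full wait status: exit code in the high byte,
    # killing signal (OOM killer, SIGSEGV, ...) in the low bits
    printf "child done: exit=%d signal=%d\n", $? >> 8, $? & 127;

    sub do_unit_of_work { select undef, undef, undef, 0.25 }   # fake work
    sub update_db_record {                             # stand-in for a real UPDATE
        my ($state) = @_;
        printf "  %3.0f%% done\n", 100 * ($state->{progress} // 0);
    }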

Another technique for reading log files is to write javascript that makes HTTP Range requests, repeatedly asking for additional data at the end of the file. Then you don't even need a perl handler; apache can serve the file for you. Of course, this assumes it's OK for users to see the entire stdout of the background process, which in many cases could contain sensitive data.
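
The real client would be a small javascript loop calling fetch() with a Range header, but the mechanics are easier to show as a command-line tail, so here's the idea in Perl with HTTP::Tiny (the URL is made up). Each request asks for bytes from the current offset onward: a 206 means new data arrived, a 416 means nothing new yet.

    use strict;
    use warnings;
    use HTTP::Tiny;

    my $url    = 'http://example.com/logs/job-1234.log';   # stand-in URL
    my $http   = HTTP::Tiny->new;
    my $offset = 0;

    while (1) {
        my $res = $http->get($url, { headers => { Range => "bytes=$offset-" } });
        if ($res->{status} == 206) {        # Partial Content: new bytes past $offset
            print $res->{content};
            $offset += length $res->{content};
        }
        elsif ($res->{status} == 200) {     # server ignored Range; take just the tail
            print substr $res->{content}, $offset;
            $offset = length $res->{content};
        }
        elsif ($res->{status} != 416) {     # 416 = no new data yet; anything else, give up
            die "unexpected status $res->{status}";
        }
        sleep 1;
    }

Apache serves static files with byte-range support out of the box, so there really is nothing to write on the server side.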

That might be way more complexity than you want to take on for this project, but I figured it was worth mentioning some of your other options.