in reply to Hit tracking optimization...

The problem you may have is when two or more of these try to run at once. The usual solution to that is based on flock.
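
A minimal sketch of the flock approach, assuming a single append-only log file (the file name and record format here are made up for illustration):

    use strict;
    use warnings;
    use Fcntl qw(:flock);

    # Each hit-tracking process appends one record to a shared log file.
    open my $log, '>>', 'hits.log' or die "open hits.log: $!";   # assumed file name
    flock($log, LOCK_EX) or die "flock: $!";     # block until we hold an exclusive lock
    print {$log} join("\t", time(), $ENV{REMOTE_ADDR} || '-'), "\n";
    close $log;                                  # closing the handle releases the lock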

Another way to deal with concurrency that I've used in the past is to write each item to a unique file in a particular directory. Then a single cron/batch job comes along later and processes each file, unlinking each one as it's finished. The only race condition is if the batch processor tries to work on a file the writer hasn't finished writing yet. I'd avoid that by not working on any file that's less than (say) a minute old.
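
A rough sketch of that spool-directory pattern; the directory name, record contents, and the processing routine are assumptions here, and only the one-minute age check comes from the description above:

    use strict;
    use warnings;

    my $dir = '/var/spool/hits';    # assumed spool directory

    # Writer side: each hit gets its own uniquely named file, so no locking is needed.
    my $name = "$dir/$$-" . time() . "-" . int rand 1_000_000;
    open my $out, '>', $name or die "open $name: $!";
    print {$out} "one hit record\n";
    close $out or die "close $name: $!";

    # Batch side (run from cron): process files older than about a minute, then unlink them.
    opendir my $dh, $dir or die "opendir $dir: $!";
    for my $file (map { "$dir/$_" } grep { !/^\./ } readdir $dh) {
        next if -M $file < 60 / 86400;      # -M is age in days; skip files under ~60 seconds old
        # process_hit_file($file);          # hypothetical processing routine
        unlink $file or warn "unlink $file: $!";
    }
    closedir $dh;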

Re^2: Hit tracking optimization...
by MidLifeXis (Monsignor) on Jan 24, 2008 at 20:30 UTC

    You can also address that using the method that qmail uses: create a tmp file in another directory on the same device, finish writing it, hard link it into your work directory with the inode number as the file name, and then unlink it from the tmp directory.
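
    A rough sketch of that tmp-then-link dance, with made-up directory names; both directories have to live on the same filesystem for link() to succeed:

        use strict;
        use warnings;

        my $tmp  = '/var/spool/hits/tmp';    # assumed tmp directory, same filesystem as the work dir
        my $work = '/var/spool/hits/new';    # assumed work directory scanned by the batch job

        # Write the record to a temporary file first.
        my $tmpname = "$tmp/$$-" . time();
        open my $out, '>', $tmpname or die "open $tmpname: $!";
        print {$out} "one hit record\n";
        close $out or die "close $tmpname: $!";

        # Hard link it into the work directory under its inode number, then drop the tmp name.
        my $inode = (stat $tmpname)[1];
        link $tmpname, "$work/$inode" or die "link: $!";
        unlink $tmpname or warn "unlink $tmpname: $!";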

    --MidLifeXis

      I like the sound of that; it sounds simple and effective, and I've done something similar in the past for an SE script. I'll use flock for now, but if I run into problems in the future I'll try this.
Re^2: Hit tracking optimization...
by cosmicperl (Chaplain) on Jan 25, 2008 at 00:24 UTC
    I do flock in the version I'm using; I should have mentioned it. But thanks for the heads-up. I should probably also mention that I do a seek after the flock, just in case the file changed while I was waiting for the lock:
    use Fcntl qw(:flock :seek);
    flock(OUTF, LOCK_EX); seek(OUTF, 0, SEEK_END);  # re-seek to end of file in case it grew while we waited for the lock