in reply to Fast Recall

Perhaps the simplest mechanism would be to write an empty file into a local directory for each failing ID, using the ID itself as the filename, at the point where you read it.
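Something like this (untested) sketch of the recording step; the directory name ./failed-ids and the helper name record_failure are just assumptions for illustration, and IDs are assumed to be filesystem-safe:

    use strict;
    use warnings;

    # Directory in which to record failure markers (illustrative name only).
    my $faildir = './failed-ids';
    mkdir $faildir unless -d $faildir;

    # Record a failure by creating an empty file named after the ID.
    sub record_failure {
        my ($id) = @_;
        open my $fh, '>', "$faildir/$id"
            or die "Cannot create marker for $id: $!";
        close $fh;
    }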

Each time you read an ID successfully, you can use the -e file test to see whether it has had a previous failure. If it has, you can unlink that marker file.
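The check on a successful read could be as small as this sketch (same assumed directory as above; clear_failure is again a hypothetical name):

    use strict;
    use warnings;

    my $faildir = './failed-ids';   # same assumed directory as above

    # On a successful read, clear any earlier failure marker for this ID.
    sub clear_failure {
        my ($id) = @_;
        my $marker = "$faildir/$id";
        unlink $marker if -e $marker;   # -e: has this ID failed before?
    }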

Later, you can use the files' timestamps to delete any that are more than 8 hours old. This could even be done by a separate process that scans the directory at regular intervals via cron.
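The sweep could be a tiny standalone script run from cron. Note this sketch uses -M (age in days since last modification) rather than a true creation time, which most Unix filesystems don't expose; for empty marker files the two are effectively the same:

    #!/usr/bin/perl
    # Cron-driven sweep: remove failure markers older than 8 hours.
    use strict;
    use warnings;

    my $faildir = './failed-ids';   # same assumed directory as above
    my $cutoff  = 8 / 24;           # 8 hours expressed in days, for -M

    for my $marker ( glob "$faildir/*" ) {
        # -M gives the file's age in days since last modification.
        unlink $marker if -M $marker > $cutoff;
    }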

It's simple, persistent, requires nothing beyond Perl itself, and should easily be fast enough to cater for 6 lookups per second.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
RIP an inspiration; A true Folk's Guy

Re^2: Fast Recall
by pemungkah (Priest) on Sep 03, 2010 at 04:18 UTC
    This should indeed keep up just fine, and it has the advantage of probably being more understandable to the Big Iron folks, as it's very similar to the kind of thing I used to do when writing TSO scripts back in the day: it was faster to alloc and delete files than to allocate, open, write, and close them repeatedly.