in reply to Persistent timed events
I'd consider using the file system.
Each record becomes a file, named for its due time plus a suffix* to distinguish time clashes, and stored in subdirectories named by date to avoid the "huge directory" problem.
*Say, an MD5 of the date/time/process ID/record contents.
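For illustration, a minimal sketch of queuing a record under that scheme (the queue/ path, the .rec suffix and the exact name format are assumptions here, not a prescribed layout):

```perl
use strict;
use warnings;
use Digest::MD5 qw( md5_hex );
use File::Path  qw( mkpath );
use POSIX       qw( strftime );

# Queue a record to be actioned at $due (epoch seconds).
# Assumed layout: queue/YYYY-MM-DD/HHMMSS-<md5>.rec
sub enqueue {
    my( $due, $record ) = @_;

    my $dir = 'queue/' . strftime( '%Y-%m-%d', localtime $due );
    mkpath( $dir ) unless -d $dir;

    # MD5 of time/pid/contents disambiguates time clashes
    my $tag  = md5_hex( join '|', $due, $$, $record );
    my $file = strftime( '%H%M%S', localtime $due ) . "-$tag.rec";

    open my $fh, '>', "$dir/$file" or die "open '$dir/$file': $!";
    print $fh $record;
    close $fh or die "close '$dir/$file': $!";

    return "$dir/$file";
}

enqueue( time() + 3600, "remind: check the backups\n" );
```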
The actioning server would use an atomic rename to remove the next entry from the due queue, then read and action it before deleting it.
If your action times need not be precise, say 1-minute granularity, then approx. 1440 files per directory should make for a reasonable glob-sort-rename-the-top-entry action, and minimise the need to retry with the next-to-top entry should another actioning server have grabbed the top one first (assuming there might be a need for multiple servers).
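Roughly, the claim step might look like this: the atomic rename to a per-process working name is what stops two servers actioning the same entry, and a failed rename just means you retry with the next one. (process() stands in for whatever your actioning code is, and the paths follow the assumed layout above.)

```perl
use strict;
use warnings;
use POSIX qw( strftime );

# Claim and action the earliest due entry in today's directory.
# Returns false if nothing is due, or every due entry was
# claimed by another server first.
sub action_next {
    my $dir = 'queue/' . strftime( '%Y-%m-%d', localtime );
    my $now = strftime( '%H%M%S', localtime );

    # Time-based names mean a plain sort gives due order;
    # keep only entries whose time has arrived.
    my @due = sort grep { m[/(\d{6})-] and $1 le $now }
              glob "$dir/*.rec";
    return 0 unless @due;

    for my $entry ( @due ) {
        my $claimed = "$entry.working.$$";

        # The atomic rename is the claim; if it fails, another
        # server grabbed this entry, so fall through to the next.
        rename $entry, $claimed or next;

        open my $fh, '<', $claimed or die "open '$claimed': $!";
        my $record = do { local $/; <$fh> };
        close $fh;

        process( $record );    # stand-in for your actioning code
        unlink $claimed;
        return 1;
    }
    return 0;
}
```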
Another method of avoiding server collisions I've used before is to have each server only glob the file system on a given 5- or 10-second boundary of each minute. I.e. have up to 12 servers, with the first waiting until 5 seconds past each minute before scanning the appropriate directory, the second until 10 seconds past the minute, and so on.
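The stagger itself is just a sleep to each server's slot within the minute, something along these lines (SERVER_SLOT is an assumed per-server setting, not anything standard; action_next() is the claim routine sketched above):

```perl
use strict;
use warnings;

# Each server gets a slot 1..12; server N only globs at N*5
# seconds past each minute, so the servers never scan together.
my $slot = $ENV{SERVER_SLOT} || 1;

while( 1 ) {
    my $now  = time;
    my $next = $now - ( $now % 60 ) + $slot * 5;
    $next += 60 if $next <= $now;

    my $wait = $next - time();
    sleep $wait if $wait > 0;

    1 while action_next();    # drain everything currently due
}
```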