I had a similar requirement a few years ago, and (back then) the fastest mechanism available to me that provided shared access and fast lookup was to use the file system.
For the sake of discussion, assume that your userids consist of mixed-case ANSI alphanumerics -- i.e. 62 characters. That gives 62**3 = 238,328 possible 3-character prefixes, so if you have 10 million users and use the first 3 characters of each name as an index into a first level of subdirectories, you'll have (on average) about 42 user directories under each prefix -- so lookup is fast.
The directory structure looks like this:
    /yourapp/index/ash/ashford/7/
                  /bre/brent/3/
                  /cra/crawford/4/
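A minimal sketch -- not from the original post -- of how a user's counter directory might be seeded under that layout: build the two-level path, then create a single file named "0" inside it; the filename *is* the count. (The temp directory stands in for /yourapp/index so the sketch is self-contained.)

```perl
use strict;
use warnings;
use File::Path qw( make_path );
use File::Temp qw( tempdir );

my $prefix = tempdir( CLEANUP => 1 );   # stand-in for /yourapp/index

sub init_counter {
    my ( $userid ) = @_;
    my $idx = substr $userid, 0, 3;     # 3-char first-level index
    my $dir = "$prefix/$idx/$userid";
    make_path( $dir );                  # like mkdir -p
    open my $fh, '>', "$dir/0" or die $!;   # current count == 0
    close $fh;
    return $dir;
}

my $dir = init_counter( 'ashford' );
print "seeded $dir\n";
```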
And the process of lookup/increment is:
    use constant LIMIT => 1000;   # whatever your daily limit is

    my $prefix = '/yourapp/index';
    my $userid = ...;
    my $idx    = substr $userid, 0, 3;

    my $limitReached = 1;
    {
        opendir my $dh, "$prefix/$idx/$userid/" or die $!;
        my ( $count ) = grep { !/^\./ } readdir $dh;   # the lone file's name is the count
        closedir $dh;

        last if $count >= LIMIT;

        rename "$prefix/$idx/$userid/$count",
               "$prefix/$idx/$userid/" . ( $count + 1 )
            or redo;   # another process renamed it first; re-read and retry

        $limitReached = 0;
    }
    ## use $limitReached to decide further action
If your data is to persist, you are going to have to do at least one directory lookup to find the DB file -- and usually more than one -- so the directory lookup is effectively free. And as rename is atomic, the shared-data problems are taken care of without the need for time-costly locking and polling.
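A small demonstration -- not from the original post -- of why the rename-based increment is race-safe: of two processes attempting the same rename, exactly one succeeds. The loser's rename fails cleanly because the source name no longer exists, so it can simply re-read the count and retry (the `or redo` above), with no lock ever taken.

```perl
use strict;
use warnings;
use File::Temp qw( tempdir );

my $dir = tempdir( CLEANUP => 1 );
open my $fh, '>', "$dir/3" or die $!;   # pretend the current count is 3
close $fh;

my $ok1 = rename "$dir/3", "$dir/4";    # first "process" wins the increment
my $ok2 = rename "$dir/3", "$dir/4";    # racing second "process" loses cleanly

printf "first: %d, second: %d\n", $ok1 ? 1 : 0, $ok2 ? 1 : 0;
# prints "first: 1, second: 0"
```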
The more characters in the alphabet available for your userids, the better spread your directory structure and the faster the lookups. The only real restriction is that the alphabet must be compatible with your file system's naming conventions, which isn't usually a problem.
In reply to Re: Daily Counters
by BrowserUk
in thread Daily Counters
by docbrown25