"but I can't see how to reduce this memory requirement to only 4 megabytes."

I said ~8 megabytes, not 4.
In Scaling Hash Limits the OP said: "my simple hash of scalars (for duplicate detection) hits a size of over 55 million"
At one bit per possible id, 55 million / 8 / 1024**2 = 6.55651092529296875 MB.
He also mentions 180 million: 180e6 / 8 / 1024**2 = 21.457672119140625 MB. But that's before de-duping, which is the purpose of the exercise; then again, it's possible his list contains no duplicates.
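For concreteness, here is a minimal sketch of the kind of bit-vector dedup those figures assume. This is not code from the thread; it assumes the ids are integers allocated more or less sequentially, arriving one per line on STDIN, and it takes 180e6 as the upper bound (adjust to suit):

    use strict;
    use warnings;

    my $max_id = 180_000_000;   # assumed upper bound on the ids
    my $seen   = '';            # bit vector: one bit per possible id (~21.5 MB)

    while ( my $id = <STDIN> ) {
        chomp $id;
        next unless $id =~ /^\d+$/ && $id <= $max_id;   # skip anything out of range
        next if vec( $seen, $id, 1 );                   # duplicate: already seen
        vec( $seen, $id, 1 ) = 1;                       # mark the id as seen
        print "$id\n";                                  # first occurrence: pass it through
    }

The vector costs max_id / 8 bytes however many ids actually turn up, which is where the megabyte figures above and below come from.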
Of course, looking around I see that Twitter uses 64-bit numbers for their user ids, and a 64-bit number can run to 20 digits, not 12. Then again, they are only just now claiming 500 million users, which is: 500e6 / 8 / 1024**2 = 59.604644775390625 MB, which should be handleable by any modern machine with ease.
Of course, it is also possible that they do not use sequential numbers for their IDs, but rather the 64-bit number is a hash of some aspect of the account -- the name or similar -- in which case the idea won't work, because a bit vector covering the full 64-bit space would need 2**64 / 8 / 1024**3 ~= 2 billion GB.
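The arithmetic throughout is just one bit per possible id; as quick checks:

    perl -E 'say 55e6  / 8 / 1024**2'    # ids up to 55 million   -> ~6.6 MB
    perl -E 'say 500e6 / 8 / 1024**2'    # ids up to 500 million  -> ~59.6 MB
    perl -E 'say 2**64 / 8 / 1024**3'    # full 64-bit hash space -> ~2.1 billion GB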
But if that were the case, the OP probably wouldn't be talking about "12-digit numbers".
Of course, the OP also doesn't explicitly mention 'user' ids, just "ids", and given the number -- 180 million, roughly the number of tweets per day currently -- these could be message ids, which probably are allocated sequentially?
Had the OP deigned to answer a couple of questions, much of the uncertainty could have been resolved.
In reply to Re^5: Scaling Hash Limits by BrowserUk, in thread Scaling Hash Limits by Endless