Back-of-a-napkin benchmarks are, well, worth about the same as a used napkin. I would suggest grabbing MySQL or PostgreSQL and trying it out. An RDBMS would certainly be the solution I would choose, given what you have described of the problem. (Not to mention that, at best, you're looking at needing 16 x 600M = 9.6B bytes, or 9.6 gigabytes, even before any overhead.)
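
To make that concrete, here is a rough sketch of the approach (untested, naturally, and assuming PostgreSQL 9.5+ with DBD::Pg; the seen_items table, item_key column, and connection details are all placeholders): let the PRIMARY KEY do the uniqueness work, and use ON CONFLICT DO NOTHING so the insert itself becomes the duplicate check.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;
    use DBD::Pg qw(:pg_types);

    # Placeholder connection details -- adjust the DSN, user, and password.
    my $dbh = DBI->connect(
        'dbi:Pg:dbname=uniqtest', 'someuser', 'somepass',
        { RaiseError => 1, AutoCommit => 1 },
    );

    # One row per item; the PRIMARY KEY index is what enforces uniqueness.
    $dbh->do(q{
        CREATE TABLE IF NOT EXISTS seen_items (
            item_key BYTEA PRIMARY KEY
        )
    });

    # ON CONFLICT DO NOTHING (PostgreSQL 9.5+) silently skips duplicates,
    # so a single INSERT doubles as the uniqueness check.
    my $sth = $dbh->prepare(q{
        INSERT INTO seen_items (item_key) VALUES (?) ON CONFLICT DO NOTHING
    });

    # A few made-up 16-byte keys for the sake of the example (third is a dup).
    my @keys = ("\x01" x 16, "\x02" x 16, "\x01" x 16);

    for my $key (@keys) {
        $sth->bind_param(1, $key, { pg_type => PG_BYTEA });
        my $rows = $sth->execute;    # '0E0' (numerically 0) when skipped
        print "duplicate: ", unpack('H*', $key), "\n" if $rows == 0;
    }

    $dbh->disconnect;

On MySQL the same idea works with INSERT IGNORE against a table with the same PRIMARY KEY.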
------
We are the carpenters and bricklayers of the Information Age.
Then there are Damian modules.... *sigh* ... that's not about being less-lazy -- that's about being on some really good drugs -- you know, there is no spoon. - flyingmoose
I shouldn't have to say this, but any code, unless otherwise stated, is untested
In reply to Re: Maintaining uniqueness across millions of items by dragonchild
in thread Maintaining uniqueness across millions of items by sdbarker