If the number of rows to delete is relatively small, you could read the keys into a hash.
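A minimal sketch of that approach: slurp the (small) delete list into a hash, then stream the big file once and skip any row whose key is in the hash. Filenames and the assumption that the key is the first whitespace-separated field are mine for illustration; here demo files are created in a temp dir so the script runs as-is.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempdir);

my $dir = tempdir( CLEANUP => 1 );
my ( $big_file, $delete_file, $out_file ) =
    map { "$dir/$_" } qw(rows.txt ids_to_delete.txt rows.kept.txt);

# Demo data; the row key is the first whitespace-separated field.
open my $fh, '>', $big_file or die $!;
print {$fh} "1 alpha\n2 beta\n3 gamma\n4 delta\n";
close $fh;
open $fh, '>', $delete_file or die $!;
print {$fh} "2\n4\n";
close $fh;

# Read the (small) delete list into a hash for O(1) lookups.
my %delete;
open my $del, '<', $delete_file or die $!;
while (<$del>) { chomp; $delete{$_} = 1 }
close $del;

# Stream the big file once, writing only the rows we keep.
open my $in,  '<', $big_file or die $!;
open my $out, '>', $out_file or die $!;
while ( my $line = <$in> ) {
    my ($key) = split ' ', $line;
    print {$out} $line unless $delete{$key};
}
close $in;
close $out or die $!;

# Show what survived.
open $in, '<', $out_file or die $!;
print while <$in>;
close $in;
```

The big file is never held in memory; only the delete keys are, so this stays cheap no matter how long the file is.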
If it's a big number, you could read them into a disk-based hash like BerkeleyDB. It's a lot slower than an in-memory hash, of course, but it would make the code pretty easy to write.
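The nice part is that a tied DB_File hash drops into the same code: tie it once and use it like any other hash, with the data living on disk instead of in RAM. A sketch, assuming Berkeley DB is installed and using a hypothetical filename (no expected output shown, since availability varies):

```perl
use strict;
use warnings;
use DB_File;                      # needs the Berkeley DB library
use Fcntl qw(O_RDWR O_CREAT);

# The hash lives on disk, so it can grow far beyond RAM --
# much slower per lookup than an in-memory hash, of course.
my %delete;
tie %delete, 'DB_File', 'delete_ids.db', O_CREAT | O_RDWR, 0644, $DB_HASH
    or die "Can't tie delete_ids.db: $!";

# From here on, %delete works like any ordinary hash.
$delete{'12345'} = 1;
print "marked\n" if exists $delete{'12345'};

untie %delete;
```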
If it were me, though, I'd probably use a database.
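With the rows loaded into a database, each delete becomes one SQL statement instead of a pass over a 15-billion-line file. A hedged sketch using DBI with SQLite (requires DBD::SQLite from CPAN; the `rows` schema and the bulk-load loop are assumptions, not part of the original post):

```perl
use strict;
use warnings;
use DBI;    # plus DBD::SQLite from CPAN

my $dbh = DBI->connect( 'dbi:SQLite:dbname=rows.db', '', '',
                        { RaiseError => 1, AutoCommit => 0 } );

$dbh->do('CREATE TABLE IF NOT EXISTS rows (id INTEGER PRIMARY KEY, data TEXT)');

# Bulk-load with a prepared statement inside one transaction.
my $ins = $dbh->prepare('INSERT INTO rows (id, data) VALUES (?, ?)');
# while (my $line = <$big_file>) { ... $ins->execute($id, $data) ... }

# A delete is then just:
$dbh->do( 'DELETE FROM rows WHERE id = ?', undef, 12345 );

$dbh->commit;
$dbh->disconnect;
```

The index on the primary key makes each delete a seek rather than a scan, and you never rewrite the whole data set.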
In reply to Re^2: 15 billion row text file and row deletes - Best Practice?
by friedo
in thread 15 billion row text file and row deletes - Best Practice?
by awohld