in reply to Re: 15 billion row text file and row deletes - Best Practice?
in thread 15 billion row text file and row deletes - Best Practice?
If the number of rows you need to delete is relatively small, you could read their keys into a hash.
If it's a big number, you could read them into a disk-based hash like BerkeleyDB. It's a lot slower than an in-memory hash, of course, but it would make the code pretty easy to write.
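A minimal sketch of the in-memory version, assuming the delete list is one key per line, the big file is tab-delimited with the key in the first column, and the file names are made up for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical file names, for illustration only.
my $delete_list = 'keys_to_delete.txt';    # one key per line
my $big_file    = 'big_file.txt';          # the 15-billion-row file
my $out_file    = 'big_file.filtered.txt';

# Load the keys to delete into a hash for O(1) lookups.
my %delete;
open my $dfh, '<', $delete_list or die "Can't open $delete_list: $!";
while ( my $key = <$dfh> ) {
    chomp $key;
    $delete{$key} = 1;
}
close $dfh;

# Stream the big file once, keeping only the rows not marked for deletion.
open my $in,  '<', $big_file or die "Can't open $big_file: $!";
open my $out, '>', $out_file or die "Can't open $out_file: $!";
while ( my $line = <$in> ) {
    my ($key) = split /\t/, $line, 2;    # assumes tab-delimited, key in column 1
    print {$out} $line unless exists $delete{$key};
}
close $in;
close $out or die "Can't close $out_file: $!";
```

For the disk-based variant, the same %delete hash could be tied to a Berkeley DB file (e.g. use DB_File; tie my %delete, 'DB_File', 'delete.db';) so the key list no longer has to fit in memory; the rest of the loop stays the same.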
If it were me, though, I'd probably use a database.
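A rough sketch of how that route might look with DBI and DBD::SQLite (the database file, table, and column names are invented for illustration; the point is that once the data and the delete list are loaded, the delete is a single statement handled by the engine):

```perl
use strict;
use warnings;
use DBI;

# Hypothetical database and schema, for illustration only.
my $dbh = DBI->connect( 'dbi:SQLite:dbname=rows.db', '', '',
                        { RaiseError => 1, AutoCommit => 1 } );

# Assuming the big file was bulk-loaded into a table 'rows' (keyed by row_key)
# and the delete list into 'delete_keys', the deletes become one statement:
$dbh->do('DELETE FROM rows WHERE row_key IN (SELECT row_key FROM delete_keys)');

$dbh->disconnect;
```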
Replies are listed 'Best First'.

- Re^3: 15 billion row text file and row deletes - Best Practice? by bobf (Monsignor) on Dec 01, 2006 at 05:35 UTC
- Re^3: 15 billion row text file and row deletes - Best Practice? by awohld (Hermit) on Dec 01, 2006 at 05:29 UTC
- by davido (Cardinal) on Dec 01, 2006 at 06:03 UTC
- by jhourcle (Prior) on Dec 01, 2006 at 15:14 UTC
- by djp (Hermit) on Dec 04, 2006 at 02:36 UTC