You're suggesting doing it the wrong way around. If the second file contains just the IDs, say 8 chars each, then by loading the second file into a hash, even if every single one of the 300,000 IDs in the first (larger) file was also in the second file, the hash would only require 13,899,476 bytes of RAM. (That's on a 64-bit OS; probably much less on a 32-bit one.)
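If anyone wants to check that figure, Devel::Size will report the footprint of such a hash directly. A rough sketch (the 8-character keys below are synthetic, and the exact byte count will vary with Perl version and build):

<code>
use strict;
use warnings;
use Devel::Size qw( total_size );

## Build a hash keyed by 300,000 distinct 8-character IDs
## and report its total memory footprint.
my %ids;
$ids{ sprintf '%08d', $_ } = 1 for 1 .. 300_000;

print total_size( \%ids ), " bytes\n";
</code>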
The OP can then process the 'first' file line by line, printing each line whose ID exists in the hash constructed from the second file.
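A minimal sketch of that approach. The filenames, and the assumption that the ID is the first whitespace-delimited field of each line, are mine for illustration:

<code>
#!/usr/bin/perl
use strict;
use warnings;

## Build a lookup hash from the IDs in the second (smaller) file.
my %ids;
open my $idFH, '<', 'ids.txt' or die "ids.txt: $!";
while( <$idFH> ) {
    chomp;
    $ids{ $_ } = 1;
}
close $idFH;

## Scan the first (larger) file line by line, printing any
## line whose ID appears in the hash.
open my $bigFH, '<', 'big.txt' or die "big.txt: $!";
while( <$bigFH> ) {
    my( $id ) = split ' ', $_;   ## assumes the ID is the first field
    print if defined $id and exists $ids{ $id };
}
close $bigFH;
</code>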
Total processing time required: ~23 seconds.
And that's a damn sight faster than you could even load the larger file into a DB, and roughly 1% of the time required for a full RDBMS (pgsql) to perform the query for just 2,000 IDs.
(R)DBs are a sledge-hammer for this nut...as with so many others for which they are routinely prescribed. If I had a hammer...I wouldn't bother to think!
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.