PerlMonks |
Re^2: 15 billion row text file and row deletes - Best Practice?
by jynx (Priest)
on Dec 01, 2006 at 22:44 UTC ( [id://587324] )
Would it be faster to do relative seeks? I'm just thinking that toward the end of the file you're seeking across 300+ GB of data every time you delete a single line, so seeking relative to the current position instead of from the start of the file might work. Just a thought...

Edit: While it's obvious, i think it's also worth noting that if each record is the same size (which seems likely) then it can be done in one pass instead of two (or at least, one pass for each chunk of the kill file, depending on kill file size).
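The one-pass idea for fixed-size records can be sketched roughly as below: keep a read cursor and a write cursor on the same handle, copy each surviving record back toward the front, and truncate at the end. This is a minimal sketch, not the poster's actual code; the record length, the `records.dat` file name, and the in-memory `%kill` set of record indices are all assumptions for illustration.

```perl
use strict;
use warnings;
use Fcntl qw(SEEK_SET);

my $RECLEN = 16;                        # assumed fixed record size, newline included
my %kill   = map { $_ => 1 } (1, 3);    # hypothetical kill set: record indices to drop
my $file   = 'records.dat';

# Build a small fixed-width test file: rec0 .. rec4, each padded to $RECLEN bytes.
open my $out, '>', $file or die "open: $!";
printf $out "%-*s", $RECLEN, "rec$_" for 0 .. 4;
close $out;

# One pass, in place: read cursor $rd walks every record,
# write cursor $wr only advances past records we keep.
open my $fh, '+<', $file or die "open: $!";
binmode $fh;
my ($rd, $wr) = (0, 0);
my $buf;
while (1) {
    seek $fh, $rd * $RECLEN, SEEK_SET or die "seek: $!";
    my $got = read $fh, $buf, $RECLEN;
    last unless $got;                   # end of file
    unless ($kill{$rd}) {
        seek $fh, $wr * $RECLEN, SEEK_SET or die "seek: $!";
        print $fh $buf;                 # copy surviving record back toward the front
        $wr++;
    }
    $rd++;
}
truncate $fh, $wr * $RECLEN or die "truncate: $!";
close $fh;
```

Every record still gets read once and moved at most once, so the work is linear in the file size rather than quadratic in the number of deletes; the seeks here are absolute for clarity, but the same loop works with relative seeks if you track the offset deltas yourself.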
In Section: Seekers of Perl Wisdom