in reply to 15 billion row text file and row deletes - Best Practice?
This way, you make a copy of the text file (to STDOUT), and only print out those lines you want to keep.

    my %kill;
    @kill{'00020123837', '00020123839'} = ();
    while (<>) {
        my ($serial) = split /,/;
        print unless exists $kill{$serial};
    }
It'll only go through the file once for the whole job, but you will need the extra disk space, as the copy will be almost as large as the original.
You will have to fill the hash with serials to kill, somehow — not necessarily the way I show here. You might read them from a database or other source file. The hash values are not important (they're undef here), only the keys matter.
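For example, if the serials to kill happen to sit in a plain text file, one per line (kill_list.txt is just a made-up name here), filling the hash could look something like this:

    # Sketch: load serials to delete from a one-serial-per-line file.
    my %kill;
    open my $fh, '<', 'kill_list.txt'
        or die "Can't open kill_list.txt: $!";
    while (my $serial = <$fh>) {
        chomp $serial;
        $kill{$serial} = undef;   # value is irrelevant; only the key matters
    }
    close $fh;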
If you want to make this a one-liner, think of using the switches -n (to loop through the input file without printing) and -i (to replace the original file with the output file when finished). Something like (using Unix quotes for the command line):
perl -n -i.orig -e 'BEGIN{@kill{"00020123837","00020123839"}=()} my($serial)=split/,/; print unless exists $kill{$serial}' myhugefile.txt
Swap the single and double quotes on Windows, as shown below.
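Roughly, on a Windows (cmd.exe) command line the same one-liner would be quoted like this:

perl -n -i.orig -e "BEGIN{@kill{'00020123837','00020123839'}=()} my($serial)=split/,/; print unless exists $kill{$serial}" myhugefile.txt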