in reply to 15 billion row text file and row deletes - Best Practice?
One alternative way to look at the problem requires co-operation from the downstream processes, and avoids having to create a copy of the file.
The idea is to open the file for read/write, and instead of deleting the record (which really means "not writing it to the new file") you seek to the beginning of the record and replace the serial number with dashes or tildes, or whatever it takes for a downstream process to ignore it.
This is akin to how DOS deleted files: it simply replaced the first character of the filename with a marker byte (0xE5), and the find-first/find-next system calls knew to skip those entries when asked to return the files in a directory.
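In Perl terms the mechanics might look something like the sketch below. It is only a sketch: the filename and the serial numbers in `%to_delete` are made up for illustration, and it assumes the serial number is the first whitespace-delimited field of each line.

```perl
use strict;
use warnings;

my %to_delete = map { $_ => 1 } qw(SN0001 SN0042);   # made-up serials

# open the file for read AND write in place -- no second copy is made
open my $fh, '+<', 'records.txt' or die "open: $!";

while (1) {
    my $pos  = tell $fh;                  # byte offset of this record
    my $line = <$fh>;
    last unless defined $line;

    my ($serial) = $line =~ /^(\S+)/;
    next unless defined $serial and $to_delete{$serial};

    # overwrite the serial number with tildes of the same length, so the
    # record keeps its size and downstream readers know to skip it
    seek $fh, $pos, 0 or die "seek: $!";
    print {$fh} '~' x length $serial;
    seek $fh, $pos + length $line, 0;     # re-sync before the next read
}
close $fh or die "close: $!";
```

Note the seeks on either side of the write: when a handle is opened `+<` you need to reposition between a read and a write (and vice versa) for the buffering to behave.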
If you have relatively few serial numbers to delete you can store them in a hash (to the extent that "relatively few" fits comfortably in 2GB of memory).
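For instance, assuming the doomed serial numbers sit one per line in their own file (filename made up for illustration), populating that hash is straightforward:

```perl
use strict;
use warnings;

my %to_delete;
open my $list, '<', 'serials-to-delete.txt' or die "open: $!";
while (my $sn = <$list>) {
    chomp $sn;
    $to_delete{$sn} = 1;        # existence is all we care about
}
close $list;

# the in-place loop above then keeps its test: $to_delete{$serial}
```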
If you have more than that, you may find that assembling them all into a regexp (with Regexp::Assemble) yields a tractable pattern. The economy of sharing the common prefixes of many serial numbers means the pattern won't be as big as you'd think. Then you test whether the line matches the regexp, rather than whether the serial number exists in the hash.
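A sketch of that variant, again assuming the serial numbers live one per line in a made-up file; Regexp::Assemble folds the shared prefixes into a single pattern:

```perl
use strict;
use warnings;
use Regexp::Assemble;

my $ra = Regexp::Assemble->new;
open my $list, '<', 'serials-to-delete.txt' or die "open: $!";
while (my $sn = <$list>) {
    chomp $sn;
    $ra->add(quotemeta $sn);    # literal serial numbers, not patterns
}
close $list;

my $re = $ra->re;               # one assembled regexp for the lot

# in the in-place loop, the membership test becomes a match:
#   next unless $serial =~ /^(?:$re)\z/;
```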
• another intruder with the mooring in the heart of the Perl
Re^2: 15 billion row text file and row deletes - Best Practice?
by DrHyde (Prior) on Dec 04, 2006 at 10:46 UTC