Now, for processing the input: you could use a sliding window to read big chunks (say 100 MB), but I don't think it will make a big difference compared to reading line by line with readline, since the main limitation here is hard-disk speed, and Perl and the OS already read in big chunks behind the scenes.
But I would certainly group the write operations, e.g. process n = 1 million lines before writing out. Collect the entries in a hash of arrays with push @{$hash{$cluster}}, $entry and append them to the temporary cluster files (open has an append mode, '>>'). Then empty the hash to avoid memory problems and process the next n lines.
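Here is a minimal sketch of that batching idea, assuming (this is my assumption, not something stated in the thread) tab-separated input with the cluster ID in the first column and temporary files named cluster_<ID>.tmp:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Collect entries per cluster in a hash of arrays and flush them to
    # per-cluster temp files every $batch_size lines.
    my $batch_size = 1_000_000;   # n lines per batch
    my %buffer;                   # cluster ID => array ref of entries
    my $count = 0;

    while ( my $line = <STDIN> ) {
        chomp $line;
        my ( $cluster, $entry ) = split /\t/, $line, 2;
        push @{ $buffer{$cluster} }, $entry;

        if ( ++$count >= $batch_size ) {
            flush_buffer( \%buffer );
            %buffer = ();         # empty the hash to keep memory bounded
            $count  = 0;
        }
    }
    flush_buffer( \%buffer );     # write out the last partial batch

    sub flush_buffer {
        my ($buf) = @_;
        for my $cluster ( keys %$buf ) {
            # '>>' opens the (hypothetical) temp file in append mode
            open my $fh, '>>', "cluster_$cluster.tmp"
                or die "Cannot append to cluster_$cluster.tmp: $!";
            print {$fh} "$_\n" for @{ $buf->{$cluster} };
            close $fh or die "Cannot close cluster_$cluster.tmp: $!";
        }
    }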
NB: In case the entries have to be unique within a cluster (you haven't been precise about that) you'd need a hash of hashes and a more complicated approach.
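If uniqueness within a cluster is needed, a hash of hashes can track what has already been seen. A rough variant of the read loop, under the same input assumptions as above, might look like this:

    # Sketch: %seen{$cluster}{$entry} marks entries already collected,
    # so duplicates within a cluster are skipped.
    # Caveat: keeping %seen for the whole file costs memory; clearing it
    # per batch only deduplicates within a batch, so a final dedup pass
    # on the temp files might still be needed.
    my ( %seen, %buffer );

    while ( my $line = <STDIN> ) {
        chomp $line;
        my ( $cluster, $entry ) = split /\t/, $line, 2;
        next if $seen{$cluster}{$entry}++;   # duplicate in this cluster
        push @{ $buffer{$cluster} }, $entry;
        # ... flush %buffer every n lines as in the sketch above ...
    }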
HTH!
Cheers Rolf
(addicted to the Perl Programming Language :)
Wikisyntax for the Monastery
Football: Perl is like chess, only without the dice
*) I'm not sure about the most efficient way, OS-wise, to merge large files, but Google or the monastery should know. I'm critical of this obsession you bio-guys have with creating huge files; I'd rather keep the data separated into several smaller files and zip them together.