in reply to Processing while reading in input

Update: The following reply deals with unsorted input. I didn't expect such a trivial case ...


Your main problem is generating the output file. I'd suggest creating one temporary file per cluster and merging them at the end*. That way you only need to append to the temporary files and keep track of the clusters.

Now, for processing the input, you could use a sliding window to read big chunks (like 100MB), but I don't think it will make a big difference compared to reading line by line with readline (the main limitation here is hard-disk speed, and Perl and the OS already read in big chunks behind the scenes).

But I would certainly group the write operations, e.g. processing n = 1 million lines before writing out. Collect the entries in a hash of arrays (push @{$hash{$cluster}}, $entry) and append them to the temporary cluster files (open has an append mode with '>>'). Then empty the hash to avoid memory problems and process the next n lines (see the sketch below).
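A minimal sketch of that batching idea, assuming a tab-separated "cluster <TAB> entry" input on STDIN, a batch size of one million lines and cluster_*.tmp file names (all of those are just assumptions, adjust them to your data):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $batch_size = 1_000_000;   # flush after this many lines (assumed value)
    my %buffer;                   # cluster => array of entries
    my $count = 0;

    while ( my $line = <STDIN> ) {
        chomp $line;
        # hypothetical format: "cluster_id <TAB> entry" -- adjust the split to your data
        my ( $cluster, $entry ) = split /\t/, $line, 2;
        push @{ $buffer{$cluster} }, $entry;

        if ( ++$count >= $batch_size ) {
            flush_buffer( \%buffer );
            %buffer = ();          # free the memory before the next batch
            $count  = 0;
        }
    }
    flush_buffer( \%buffer );      # don't forget the last partial batch

    sub flush_buffer {
        my ($buf) = @_;
        for my $cluster ( keys %$buf ) {
            open my $fh, '>>', "cluster_$cluster.tmp"
                or die "Can't append to cluster_$cluster.tmp: $!";
            print {$fh} "$_\n" for @{ $buf->{$cluster} };
            close $fh;
        }
    }

Opening and closing the handles on every flush keeps the sketch simple; with very many clusters you might want to cache the open filehandles instead.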

NB: In case the entries have to be unique within a cluster (you haven't been precise about that), you'd need a hash of hashes and a more complicated approach (see below).
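Something along these lines, as a modification of the sketch above (again only an assumption about what you need):

    # declared next to %buffer, outside the read loop:
    my %seen;                                 # cluster => { entry => 1 }

    # and instead of the unconditional push inside the loop:
    push @{ $buffer{$cluster} }, $entry
        unless $seen{$cluster}{$entry}++;     # skip entries already seen for this cluster

Note that %seen can't be emptied together with %buffer, otherwise duplicates from different batches slip through; if it grows too large you'd have to deduplicate the temporary files afterwards instead (e.g. with sort -u).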

HTH!

Cheers Rolf
(addicted to the Perl Programming Language :)

*) I'm not sure about the most efficient way, OS-wise, to merge large files, but Google or the monastery should know. I'm critical of this obsession you bio-guys have with creating huge files; I'd rather keep the data in several smaller files and zip them together.
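For the merge step itself, simply appending the per-cluster files one after the other is usually good enough (a shell "cat cluster_*.tmp > result.txt" does the same job). A sketch in Perl, reusing the assumed cluster_*.tmp names from above and a hypothetical result.txt:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # append every temporary cluster file to one result file, then remove it
    open my $out, '>', 'result.txt' or die "Can't write result.txt: $!";
    for my $tmp ( glob 'cluster_*.tmp' ) {
        open my $in, '<', $tmp or die "Can't read $tmp: $!";
        print {$out} $_ while <$in>;   # copy line by line; Perl buffers the I/O for us
        close $in;
        unlink $tmp or warn "Couldn't remove $tmp: $!";
    }
    close $out;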