Theo, this solution, if I'm parsing it correctly, assumes that all duplicate records are 'stacked' together... i.e., all the 1111 lines occur one after another, correct? I pretty much discarded that pattern as soon as I saw it, since rarely do the problem sets order themselves that conveniently. ;)
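For reference, the 'stacked' approach looks something like this minimal sketch (the filenames are hypothetical, and I'm treating the whole line as the record); it can only tell duplicates from uniques because identical lines are assumed to sit on consecutive lines:

<code>
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of the "stacked duplicates" approach: compare each line to the
# previous one and flush a run when the value changes. Writes each
# distinct record once, to one of two files. Filenames are made up.
open my $in,   '<', 'records.txt'    or die "records.txt: $!";
open my $uniq, '>', 'unique.txt'     or die "unique.txt: $!";
open my $dups, '>', 'duplicates.txt' or die "duplicates.txt: $!";

my ($prev, $count);
while (my $line = <$in>) {
    chomp $line;
    if (defined $prev && $line eq $prev) {
        $count++;
        next;
    }
    # the previous run has ended -- decide which file it belongs in
    print { $count > 1 ? $dups : $uniq } "$prev\n" if defined $prev;
    ($prev, $count) = ($line, 1);
}
# don't forget the final run
print { $count > 1 ? $dups : $uniq } "$prev\n" if defined $prev;
</code>

The moment two copies of a record are separated by anything else, both land in the "unique" file, which is exactly the risk I'm talking about.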
If that pattern is guaranteed, then yes, the hash is a waste of memory. If it's not, then your solution... isn't. Personally, I'd rather eat the memory usage and feel comfortable knowing I didn't have to rely on my input to match my expectations, which are fraught with danger and ignorance most days.
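Here's roughly what I mean, as a minimal sketch (same hypothetical filenames, whole line treated as the record): the hash counts every record, so the order of the input doesn't matter at all:

<code>
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of the hash approach: count occurrences first, then split on
# the counts. Costs one hash entry per distinct record, but makes no
# assumptions about how the input is ordered. Filenames are made up.
open my $in, '<', 'records.txt' or die "records.txt: $!";
my %seen;
while (my $line = <$in>) {
    chomp $line;
    $seen{$line}++;
}

open my $uniq, '>', 'unique.txt'     or die "unique.txt: $!";
open my $dups, '>', 'duplicates.txt' or die "duplicates.txt: $!";
for my $rec (keys %seen) {
    print { $seen{$rec} > 1 ? $dups : $uniq } "$rec\n";
}
</code>

Note that keys %seen hands the records back in no particular order; if the output needs to stay in input order you'd have to track that separately.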
If the memory usage of the hash is that problematic (and let's be honest... when folks say "I only had X amount of memory to use!" it's not because that was all they *wanted* to use, now was it? :)), then stash the results off in some other manner (DBI leaps to mind) and read in a limited set of lines at a time.
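Something along these lines, say with DBD::SQLite (the database file, table, and column names are all made up for the sketch): the counts live on disk instead of in a hash, so memory stays flat no matter how many distinct records there are:

<code>
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Sketch of pushing the bookkeeping into a database via DBI so the
# script never holds more than one line in memory. Names are made up.
my $dbh = DBI->connect('dbi:SQLite:dbname=counts.db', '', '',
                       { RaiseError => 1, AutoCommit => 0 });
$dbh->do('CREATE TABLE IF NOT EXISTS counts (rec TEXT PRIMARY KEY, n INTEGER)');

my $ins = $dbh->prepare('INSERT OR IGNORE INTO counts (rec, n) VALUES (?, 0)');
my $upd = $dbh->prepare('UPDATE counts SET n = n + 1 WHERE rec = ?');

open my $in, '<', 'records.txt' or die "records.txt: $!";
while (my $line = <$in>) {
    chomp $line;
    $ins->execute($line);
    $upd->execute($line);
    $dbh->commit unless $. % 10_000;   # commit in batches, not per line
}
$dbh->commit;

open my $uniq, '>', 'unique.txt'     or die "unique.txt: $!";
open my $dups, '>', 'duplicates.txt' or die "duplicates.txt: $!";
my $sth = $dbh->prepare('SELECT rec, n FROM counts');
$sth->execute;
while (my ($rec, $n) = $sth->fetchrow_array) {
    print { $n > 1 ? $dups : $uniq } "$rec\n";
}
$dbh->commit;
$dbh->disconnect;
</code>

Slower than the in-memory hash, obviously, but it trades speed for a memory footprint that doesn't grow with the data.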
He said in the specification that the input file was already sorted.