Well, really, I don't think 120,000 is all that big. I recently wrote a script to parse out duplicates on our backup tapes. I clocked the number of cycles and it pushed close to 9 million in about 30 seconds (from memory), and I'm not on a ripper of a machine. It should rip through your 120,000 relatively quickly.
Putting speed aside, I'm more interested in how you are weeding out duplicates. Using hashes will be faster: use the email address as the hash key and just test whether the key already exists. If it does, you know you have a duplicate and can output it to a separate file or ignore it completely.
I'll assume you know how to read your emails into an array. Just do a foreach over the array, put each address into the hash, and output any that already exist. See the example below.
Enjoy!
Dean
foreach (@array) {
    if (exists $hash{$_}) {
        # already seen: it's a duplicate, so print it here (or just ignore it)
        print "$_\n";
    } else {
        # first time we've seen this address
        $hash{$_} = 1;
    }
}
# then print the keys of %hash to get your non-duplicate results
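If it helps, here's the same idea as a complete runnable script. The sample addresses are just placeholders; in practice you'd fill @emails from your file. The grep line is the common one-liner form of the same hash test:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Placeholder data -- substitute your real list read in from a file.
my @emails = (
    'fred@example.com',
    'barney@example.com',
    'fred@example.com',
);

# %seen tracks addresses we've met; the post-increment returns 0 (false)
# the first time a key is seen, so grep keeps only first occurrences.
my %seen;
my @unique = grep { !$seen{$_}++ } @emails;

print "$_\n" for @unique;
```

Any address whose count in %seen ends up greater than 1 is a duplicate, so you can report those separately if you want a list of the offenders rather than just the clean set.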