in reply to Removing duplicates in large files

Well, really, I don't think 120,000 is all that big. I recently wrote a script to parse out duplicates on our backup tapes. I clocked the number of cycles and it pushed near on 9 million in about 30 seconds (from memory), and I'm not on a ripper of a machine. It should rip through your 120,000 relatively quickly.

Putting speed aside, I'm more interested in how you are weeding out duplicates. Using a hash will be faster. Use the email address as the key to the hash and just test whether the key already exists. If it does, then you know you have a duplicate and can output it to a separate file or just ignore it completely.

I'll assume you know how to read your emails into an array. Just do a foreach on the array, put each one into the hash, and output any that already exist. For example, see below.

Enjoy!
Dean
foreach (@array) {
    if (! $hash{$_}) {
        $hash{$_} = $_;
    }
    else {
        ## duplicate -- output it here, or just ignore it
        print $_;
    }
}
# then print your hash to get your non-duplicate results
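And if you'd rather collect the duplicates in their own file instead of just printing them, here's a minimal sketch of that variation (the filenames emails.txt and dupes.txt are only placeholders for the example):

use strict;
use warnings;

my %seen;
open my $in,    '<', 'emails.txt' or die "Can't read emails.txt: $!";
open my $dupes, '>', 'dupes.txt'  or die "Can't write dupes.txt: $!";

while ( my $email = <$in> ) {
    chomp $email;
    if ( $seen{$email}++ ) {
        print {$dupes} "$email\n";   # seen before -- it's a duplicate
    }
    else {
        print "$email\n";            # first sighting -- keep it
    }
}

close $in;
close $dupes;

The $seen{$email}++ test does the exists-check and the insert in one go, so the hash only ever holds one entry per address.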

Re: Re: Removing duplicates in large files
by stvn (Monsignor) on Jan 31, 2004 at 01:39 UTC

    Assuming, too, that all your emails are in an array, it is even simpler to let Perl do most of the work:

    %hash_of_unique_emails = map { $_ => 1 } @array_of_emails
    Or even better, as a one-liner from the shell:
    perl -e 'print keys %{ { map { $_ => 1 } <> } }' < test.data
    That should work on Windows too (it worked fine on my OS X machine, but alas, no Windows box here to test it on).
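    If the single quotes trip up the Windows shell (cmd.exe doesn't treat them as quoting), the same map-into-an-anonymous-hash trick drops straight into a tiny script; a minimal sketch, assuming the addresses live one per line in test.data and the script is saved as, say, uniq.pl (the name is just for the example):

    # same idea as the one-liner, saved as a script so shell quoting
    # doesn't matter -- run as: perl uniq.pl test.data
    use strict;
    use warnings;

    print keys %{ { map { $_ => 1 } <> } };

    Note that keys hands the addresses back in no particular order, which is fine for de-duplication but worth knowing if you need to keep the original order.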

    -stvn

    Update: initially forgot the map {} in the first example... sorry, been a long day...