in reply to Help for finding duplicates in huge files
I first made two 1.5-million-row files like yours to test. Each ID has a length of 38 characters and is guaranteed to be unique within each file; some IDs, however, can be found in both files.
The following program reads both files and produces three arrays: @both (the IDs found in both files), and @first and @second (the IDs found only in the first or only in the second file, respectively).
On my Windows XP Lenovo X200s laptop running Perl 5.12 it takes 5 seconds to read the first file into the hash and 10 seconds to check the second file against the hash and put the IDs into their respective arrays.

use Modern::Perl;

my %first_file;
{
    open my $FIRST, '<', 'first.txt' or die $!;
    while (<$FIRST>) {
        chomp;
        $first_file{$_} = '';    # store every ID of the first file as a hash key
    }
}

my ( @both, @first, @second );
{
    open my $SECOND, '<', 'second.txt' or die $!;
    while (<$SECOND>) {
        chomp;
        if ( exists $first_file{$_} ) {
            push @both, $_;           # ID occurs in both files
            delete $first_file{$_};   # whatever remains in the hash is unique to the first file
        }
        else {
            push @second, $_;         # ID occurs only in the second file
        }
    }
}

@first = keys %first_file;            # IDs unique to the first file
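
If you want the results on disk rather than only in memory, a minimal follow-up sketch could write each array to its own file after the comparison. The output filenames (both.txt, first_only.txt, second_only.txt) are just examples, not part of the program above:

# a minimal sketch, run after the program above has filled the three arrays;
# the output filenames are assumptions, pick whatever suits you
for my $set ( [ 'both.txt', \@both ], [ 'first_only.txt', \@first ], [ 'second_only.txt', \@second ] ) {
    my ( $file, $ids ) = @$set;
    open my $OUT, '>', $file or die $!;
    say {$OUT} $_ for @$ids;    # one ID per line, 'say' is enabled by Modern::Perl
    close $OUT;
}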
CountZero
"A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little nor too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity." - The Tao of Programming, 4.1 - Geoffrey James