in reply to Help for finding duplicates in huge files
A simple variation of a merge sort would do the trick. I posted a node a while back with code to do it. You'd just need to change the merge step so that instead of interleaving, it compares the two current lines and sends each one to the appropriate output file. A brief (untested) example:
#!/usr/bin/perl
use strict;
use warnings;

my ($infile1, $infile2) = ('abc', 'def');
my ($outfile1, $outfile2, $outfile3) = ('out.both', 'out.1', 'out.2');

open my $IF1, '-|', "sort -k3 $infile1" or die "opening $infile1: $!";
open my $IF2, '-|', "sort -k3 $infile2" or die "opening $infile2: $!";
open my $OF1, '>', $outfile1 or die "opening $outfile1: $!";
open my $OF2, '>', $outfile2 or die "opening $outfile2: $!";
open my $OF3, '>', $outfile3 or die "opening $outfile3: $!";

# Compare on the same key the sort used (field 3).
sub key { (split ' ', $_[0])[2] }

my $f1 = <$IF1>;
my $f2 = <$IF2>;
while (defined($f1) or defined($f2)) {
    if (!defined($f2) or (defined($f1) and key($f1) lt key($f2))) {
        print $OF2 $f1;     # only in file 1
        $f1 = <$IF1>;
    }
    elsif (!defined($f1) or key($f2) lt key($f1)) {
        print $OF3 $f2;     # only in file 2
        $f2 = <$IF2>;
    }
    else {                  # keys match: record is in both files
        print $OF1 $f1;
        $f1 = <$IF1>;
        $f2 = <$IF2>;
    }
}
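If you only need whole-line comparisons (rather than keying on field 3 as above), the standard tools already do this three-way split: sort both files and hand them to comm(1). A sketch, with made-up sample data standing in for the abc/def files:

```shell
# hypothetical sample inputs, standing in for $infile1/$infile2 above
printf '%s\n' apple banana cherry > abc
printf '%s\n' banana cherry date  > def

sort abc > abc.sorted
sort def > def.sorted

comm -12 abc.sorted def.sorted > out.both   # lines present in both
comm -23 abc.sorted def.sorted > out.1      # lines only in abc
comm -13 abc.sorted def.sorted > out.2      # lines only in def
```

comm requires sorted input, which is why the sort step comes first; the digit flags name the columns to suppress (1 = unique to the first file, 2 = unique to the second, 3 = common).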
Also, if you plan on doing many operations like this, you may just want to dump your data in a database, and let the database do the work.
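For the database route, the shape of it would be something like this (a sketch only; the table and column names are made up, and the import step that extracts field 3 as the key is elided):

```sql
-- one table per input file: the full line plus the comparison key
CREATE TABLE t1 (line TEXT, key TEXT);
CREATE TABLE t2 (line TEXT, key TEXT);
-- ...import each file, filling key from field 3 of the line...

-- records in both files:
SELECT t1.line FROM t1 JOIN t2 USING (key);
-- only in file 1:
SELECT line FROM t1 WHERE key NOT IN (SELECT key FROM t2);
-- only in file 2:
SELECT line FROM t2 WHERE key NOT IN (SELECT key FROM t1);
```

With an index on key, the database handles files far larger than memory without you writing any merge logic.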
...roboticus
When your only tool is a hammer, all problems look like your thumb.