in reply to Comparing lines of multiple files
If I've understood you right, this should do it:
my %h;

# Build a giant hash of all the info. Keys are ids, values
# are hashrefs whose keys are the source filename and whose
# values are the lines themselves.
while (<>) {
    my @fields = split ',';
    $h{$fields[0]}{$ARGV} = $_;
}

# For each id (lexically sorted):
for my $id (sort keys %h) {
    my @keys = keys %{$h{$id}};

    # If it was present in only one file, print it and move on.
    if (scalar @keys == 1) {
        print $h{$id}{$keys[0]};
        next;
    }

    # If it was present in more than one, find out whether
    # all the lines are the same by building a hash with
    # each line as the key, then testing whether you end
    # up with more than one key.
    my %cmp;
    $cmp{$_} = '' for values %{$h{$id}};
    print keys %cmp if scalar keys %cmp == 1;
}
Updated: Now I feel silly. This can be much simpler.
my %h;

while (<>) {
    my @fields = split ',';
    $h{$fields[0]}{$_} = '';
}

for my $id (sort keys %h) {
    print keys %{$h{$id}} if scalar keys %{$h{$id}} == 1;
}
and, if one really wanted, the for loop could even be the gratuitously uber-terse:
scalar keys %{$h{$_}} == 1 and print keys %{$h{$_}} for sort keys %h;
I love Perl.
Updated again: You know how it goes. You start thinking about how something can be terser, and next thing you know, you're golfing.
perl -F, -ane '$h{$F[0]}{$_}=0;END{keys%{$h{$_}}==1&&print keys%{$h{$_}}for sort keys%h}' f1.txt f2.txt
OK. I stop procrastinating now.
In Section: Seekers of Perl Wisdom