note
Zed_Lopez
<p>If I've understood you right, this should do it:</p>
<code>
my %h;
# Build a giant hash of all the info. Keys are ids; values
# are hashrefs whose keys are the source filenames and whose
# values are the lines themselves.
while (<>) {
    my @fields = split ',';
    $h{$fields[0]}{$ARGV} = $_;
}
# For each id (lexically sorted):
for my $id (sort keys %h) {
    my @keys = keys %{$h{$id}};
    # If it was present in only one file, print it and move on.
    if (scalar @keys == 1) {
        print $h{$id}{$keys[0]};
        next;
    }
    # If it was present in more than one, find out whether
    # all the lines are the same by building a hash with
    # each line as a key, then testing whether you end
    # up with more than one key.
    my %cmp;
    $cmp{$_} = '' for values %{$h{$id}};
    print keys %cmp if scalar keys %cmp == 1;
}
</code>
<p><b>Updated</b>: Now I feel silly. This can be much simpler.</p>
<code>
my %h;
while (<>) {
    my @fields = split ',';
    $h{$fields[0]}{$_} = '';
}
for my $id (sort keys %h) {
    print keys %{$h{$id}} if scalar keys %{$h{$id}} == 1;
}
</code>
<p>and, if one really wanted, the for loop could even be the gratuitously uber-terse:</p>
<code>
scalar keys %{$h{$_}} == 1 and print keys %{$h{$_}} for sort keys %h;
</code>
<p>I love Perl.</p>
<p><b>Updated again:</b> You know how it goes. You start thinking about how something can be terser, and next thing you know, you're golfing.</p>
<code>
perl -F, -ane '$h{$F[0]}{$_}=0;END{keys%{$h{$_}}==1&&print keys%{$h{$_}}for sort keys%h}' f1.txt f2.txt
</code>
<p>OK. I stop procrastinating now.</p>