If you have a solution that works for two files, all you have to do is apply that solution to a third file and the common lines of the first two, and so on until you have no more files.
Once you understand what $seen{$_} .= @ARGV does, it shouldn't be too hard to extend it to more than two files, especially if you look at the output of the following command:
> perl -wnle "print qq(Files remaining ) . @ARGV; print $seen{$_} .= @ARGV;" file1 file2 file3
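With three files, @ARGV holds 2, 1 and 0 entries while the first, second and third file are being read, so a line that occurs in every file ends up with the value "210" in %seen. As a sketch of where that leads (assuming exactly three files and lines that are unique within each file; the output comes out in hash order, i.e. effectively random), you could print only those lines:

> perl -wnle "$seen{$_} .= @ARGV; END { print for grep { $seen{$_} eq qq(210) } keys %seen }" file1 file2 file3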
If your lines are not unique within a file, you'll have to define what should happen.
Personally, I think a better approach would be to keep a list of common lines and reduce that list for each file, or maybe just use the uniq tool. But depending on your needs, the approach might need to be different, for example if the order of lines is important.
My approach would be (see perlfaq4 for finding the intersection of two arrays):
    #!perl -w
    use strict;

    # Intersection of two arrays, adapted from perlfaq4's recipe;
    # this variant keeps the line order of the first array and
    # assumes lines are unique within each file.
    sub intersect {
        my ($first, $second) = @_;
        my %in_second = map { $_ => 1 } @$second;
        return grep { $in_second{$_} } @$first;
    };

    # Minimal helper: returns the chomped lines of a file.
    sub read_file {
        my ($name) = @_;
        open my $fh, '<', $name or die "Couldn't open '$name': $!";
        chomp( my @lines = <$fh> );
        return @lines;
    };

    my $first_file = shift @ARGV;
    my @common = read_file($first_file);
    for (@ARGV) {
        @common = intersect( \@common, [ read_file($_) ] );
    };
    print "$_\n" for @common;
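Saved as, say, common.pl (the name is arbitrary), this prints the lines that occur in every file, in the order they appear in the first file:

> perl common.pl file1 file2 file3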
In reply to Re: find common lines in many files by Corion
in thread find common lines in many files by Anonymous Monk