First, I'd do the split as "split(/\s+/,$list_of_num)". \s is 'whitespace', which includes ' ' (space), \t (tab), \f (form feed), \r (carriage return), and \n (newline).
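A quick sketch of that split on some assumed sample data. One caveat worth knowing: with a /\s+/ pattern, leading whitespace produces an empty first field, whereas the special-case split ' ' (a literal single-space string) strips leading whitespace first.

```perl
use strict;
use warnings;

# Splitting on any run of whitespace (sample data assumed):
my $list_of_num = "1 2\t3\n4";
my @nums = split(/\s+/, $list_of_num);
print join(",", @nums), "\n";   # 1,2,3,4

# Leading-whitespace caveat:
my @a = split(/\s+/, " 1 2");   # ("", "1", "2") - empty first field
my @b = split(' ',  " 1 2");    # ("1", "2")     - leading space stripped
```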
If you don't need the original inner lists in the form they were read in, why not make them a hash first? This would save you a traversal of the list to build a hash of it for the unique-ing part.
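That hash-first idea in miniature (sample line assumed): split the line and feed it straight into a hash, so duplicates within the line collapse immediately and no second pass is needed.

```perl
use strict;
use warnings;

# Split and unique-ify in one step (sample data assumed):
my $line = "5 5 6";
my %seen = map { $_ => 1 } split(/\s+/, $line);
my @unique = sort keys %seen;
print join(",", @unique), "\n";   # 5,6
```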
This is essentially the same idea as others have posted, except this keeps track of which sets each duplicate was found in.

use strict;
use Data::Dumper;

my @input = ("1 2 3", "5 5 6", "1 4 6");

# outer map: take each input line
# inner map: split it on whitespace and return it as a hash ref
my @array = map { { map { $_ => 1 } split(/\s+/,$_) } } @input;
print Dumper @array;

my %unique;
for (my $i=0; $i<=$#array; $i++) {
    foreach (keys %{$array[$i]}) {
        $unique{$_}->{'pos'}->{$i} = 1;
        $unique{$_}->{'count'}++;
    }
}

my @duplicates = grep { $unique{$_}->{'count'} > 1 } keys %unique;
foreach (@duplicates) {
    print "DUP: $_ In lists: ";
    print join(",", keys %{$unique{$_}->{'pos'}});
    print "\n";
}
/\/\averick
In reply to Re: Comparing between multiple sets of data
by maverick
in thread Comparing between multiple sets of data
by flounder