This is a short way that uses grep to find both the unique and the matching elements in two arrays. Please post code if you want comments on its efficiency.

Update

no_slogan points out that the embedded loops do not scale well, as they slow geometrically (the grep rescans all of @b for every element of @a), so I have added a hash solution. Just to be consistent this includes grep too :-)

my @a = (0,1,2,3,4,5,6,7,8);
my @b = (0,2,4,6,8,10,12,14,16);
my (@matches,@mismatch);

print "An iterative solution\n";
# for each element of @a, grep scans the whole of @b
foreach $a (@a) {
    push @matches, $a if grep { $a == $_ } @b;
}
foreach $a (@a) {
    push @mismatch, $a unless grep { $a == $_ } @b;
}
print "The arrays share: @matches\n";
print "Unique elements in \@a: @mismatch\n";

print "\nA hash solution\n";
@matches = @mismatch = ();
# build a lookup hash from @b, then test each element of @a against it
map { $seen{$_}++ } @b;
@matches  = grep { defined $seen{$_} } @a;
@mismatch = grep { !defined $seen{$_} } @a;
print "The arrays share: @matches\n";
print "Unique elements in \@a: @mismatch\n";
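If I have traced it correctly, both approaches report the same answer for the sample arrays above:

The arrays share: 0 2 4 6 8
Unique elements in @a: 1 3 5 7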

Always interested to see other ways to do things, so please post your code. Note that although this code is short, in Perl shorter is often slower, so it may be that your method is faster. If it is also shorter, it looks like a new round of GOLF :-) If you post some code I will run a Benchmark, or you can do it yourself; see the node Benchmark made easy for how.
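As a quick sketch of the sort of comparison I mean (this assumes a Benchmark module recent enough to export cmpthese, which prints a relative-rate table; the sample arrays here are just illustrative):

use Benchmark qw(cmpthese);

my @a = (0..100);
my @b = (50..150);

# a negative count means "run each sub for at least that many CPU seconds"
cmpthese( -2, {
    iterate => sub {
        my @m;
        for my $x (@a) { push @m, $x if grep { $x == $_ } @b }
    },
    hash => sub {
        my %s;
        $s{$_}++ for @b;
        my @m = grep { $s{$_} } @a;
    },
});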

Update 2

I have benchmarked the code above. As predicted by no_slogan, the results for the embedded loop structure are so embarrassing I am going to hide them behind a readmore. Using a 10,000-record data set, here is what happened....
use Benchmark;

$build = <<'BUILD';
my @a = (0..10000);
my @b = (9000..19000);
BUILD

$iterate = <<'ITERATE';
my @a = (0..10000);
my @b = (9000..19000);
my (@matches,@mismatch);
@matches = @mismatch = ();
foreach $a (@a) {
    push @matches, $a if grep { $a == $_ } @b;
}
foreach $a (@a) {
    push @mismatch, $a unless grep { $a == $_ } @b;
}
ITERATE

$hash = <<'HASH';
my @a = (0..10000);
my @b = (9000..19000);
my (@matches,@mismatch);
@matches = @mismatch = ();
map { $seen{$_}++ } @b;
@matches  = grep { defined $seen{$_} } @a;
@mismatch = grep { !defined $seen{$_} } @a;
HASH

timethese( 1, { 'build' => $build, 'iterate' => $iterate, 'hash' => $hash } );

__END__
C:\>perl test.pl
Benchmark: timing 1 iterations of build, hash, iterate...
     build: -1 wallclock secs ( 0.00 usr +  0.00 sys =  0.00 CPU)
            (warning: too few iterations for a reliable count)
      hash:  0 wallclock secs ( 0.16 usr +  0.00 sys =  0.16 CPU) @ 6.25/s (n=1)
            (warning: too few iterations for a reliable count)
   iterate: 147 wallclock secs (147.20 usr + 0.00 sys = 147.20 CPU) @ 0.01/s (n=1)
            (warning: too few iterations for a reliable count)
C:\>

Ouch! So iteration is about 1000 times slower than the hash method. And that's on a PIII 800 with no other load. Good lesson here methinks.
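Incidentally, a common variant of the hash trick (my sketch, not one of the benchmarked snippets above) builds the lookup with a hash slice and tests membership with exists, which avoids storing counts:

my %in_b;
@in_b{@b} = ();   # hash slice: the elements of @b become the keys of %in_b
my @matches  = grep {  exists $in_b{$_} } @a;
my @mismatch = grep { !exists $in_b{$_} } @a;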

cheers

tachyon

s&&rsenoyhcatreve&&&s&n.+t&"$'$`$\"$\&"&ee&&y&srve&&d&&print

