OK, a couple of errors on my part... (mea culpa)
However, on closer investigation, it turns out the results are highly
data-dependent. So what did I do? First, the code (cleaned up, and
with GrandFather's @values added):
use strict;
use warnings;
use Benchmark qw(cmpthese);
use List::MoreUtils;

my @data;    # AB
for ( 1 .. 1e4 ) {
    push @data, int( rand 1e6 );
}

my @lines;   # BU
for ( 1 .. 1e3 ) {
    my $line = int( rand 1e6 );
    $line .= chr(9) . int( rand 1e6 ) while length( $line ) < 4096;
    push @lines, $line;
}

my @values = map { int rand 10 } 1 .. 1000;    # GF

my $data;
$data = \@data;
#$data = \@lines;
#$data = \@values;

sub uniq1 {    # copied from List::MoreUtils
    my %h;
    map { $h{$_}++ == 0 ? $_ : () } @_;
}

sub uniq2 {
    my %h;
    grep { $h{$_}++ == 0 } @_;
}

sub uniq3 {    # OP -- grep is used only for its side effect of filling %h
    my %h;
    grep { $h{$_} = undef } @_;
    keys %h;
}

sub uniq4 {    # BrowserUk -- the hash slice creates all keys in one go
    my %h;
    undef @h{ @_ };
    keys %h;
}

cmpthese( -1, {
    'uniqM' => sub { my @uniq = List::MoreUtils::uniq(@$data) },
    'uniq1' => sub { my @uniq = uniq1(@$data) },
    'uniq2' => sub { my @uniq = uniq2(@$data) },
    'uniq3' => sub { my @uniq = uniq3(@$data) },
    'uniq4' => sub { my @uniq = uniq4(@$data) },
});
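One caveat worth keeping in mind when comparing these (a side note of mine, not part of the benchmark itself): uniq1, uniq2 and List::MoreUtils::uniq preserve input order, whereas uniq3 and uniq4 return keys %h, i.e. in unpredictable hash order. A minimal sketch illustrating the difference, using uniq2- and uniq4-style one-liners on a toy list:

```perl
use strict;
use warnings;

my @input = qw(b a b c a);

# uniq2-style: grep with a seen-hash, preserves first-occurrence order
my %seen;
my @ordered = grep { $seen{$_}++ == 0 } @input;
print "uniq2-style: @ordered\n";    # b a c

# uniq4-style: hash slice, then keys %h -- order is not guaranteed
my %h;
undef @h{ @input };
my @unordered = keys %h;
print "uniq4-style (sorted for display): @{[ sort @unordered ]}\n";

# As *sets* the two agree
die "mismatch" unless join( ',', sort @ordered ) eq join( ',', sort @unordered );
```

So the speed comparison is fair only if the caller doesn't care about ordering.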
I first started with my input data ("AB", an adapted/simplified
version of BrowserUk's random input generator), and got the following
results:
        Rate uniq1 uniqM uniq2 uniq3 uniq4
uniq1 35.2/s    --   -1%   -5%  -15%  -21%
uniqM 35.5/s    1%    --   -4%  -14%  -20%
uniq2 36.9/s    5%    4%    --  -11%  -17%
uniq3 41.5/s   18%   17%   13%    --   -7%
uniq4 44.7/s   27%   26%   21%    8%    --
From this I had concluded (prematurely) that there is virtually no
difference between "uniq1" and "uniqM" (the XS implementation), so I
commented out the latter benchmark (my error 1).
Then, after having played around a bit, I had settled on the
following results (which is where the reported ~40% for Perl 5.10.0
came from):
        Rate uniq2 uniq1 uniq3 uniq4
uniq2 34.2/s    --   -4%  -30%  -30%
uniq1 35.5/s    4%    --  -28%  -28%
uniq3 49.1/s   43%   38%    --    0%
uniq4 49.1/s   43%   38%    0%    --
The thing I had overlooked (error 2) is that my $data reference
was still pointing at BrowserUk's data ("BU"), which I had been
playing around with in between. So, those results are in fact for
rather unusual input, i.e. 1000 strings of around 4K each...
The full set with the BU data is, btw:
        Rate uniqM uniq2 uniq1 uniq3 uniq4
uniqM 24.8/s    --  -28%  -30%  -50%  -50%
uniq2 34.2/s   38%    --   -4%  -31%  -31%
uniq1 35.5/s   43%    4%    --  -28%  -28%
uniq3 49.5/s  100%   45%   39%    --    0%
uniq4 49.5/s  100%   45%   39%    0%    --
which shows that, for large strings (probably all of them unique), the XS variant is clearly the slowest (!)
With GrandFather's input data, OTOH, I do get similar results:
          Rate uniq1 uniq2 uniq3 uniq4 uniqM
uniq1   3445/s    --  -18%  -27%  -77%  -80%
uniq2   4213/s   22%    --  -10%  -72%  -75%
uniq3   4696/s   36%   11%    --  -69%  -72%
uniq4  15175/s  340%  260%  223%    --  -10%
uniqM  16905/s  391%  301%  260%   11%    --
Overall, BrowserUk's uniq() seems to be the winner. In other words, with my original data (which isn't all that unrealistic) the findings essentially remain the same, but there is huge variation depending on the type of input.
Moral of the story: thou shalt not be lazy, and thou shalt always
disclose thy benchmark code (telling myself) ;(