Abigail, I like your logic and agree it should be more efficient with large step sizes, but it doesn't appear to be faster. I've tried the following step sizes on an array of 1000000 elements: 5, 10, 100, 100, etc, 100000, 200000, 300000 & 500000. I bencmarked the results. The output I have included is with the step size of 500000 but is a fair summary of the results I saw:
Benchmark: running Step 1, Step 2, Step 3 for at least 10 CPU seconds.
+..
Step 1: 10 wallclock secs (10.47 usr + 0.02 sys = 10.49 CPU) @ 18
+5755.48/s (n=1948575)
Step 2: 11 wallclock secs (10.48 usr + 0.02 sys = 10.50 CPU) @ 18
+7398.10/s (n=1967680)
Step 3: 10 wallclock secs (10.01 usr + 0.00 sys = 10.01 CPU) @ 31
+5978.92/s (n=3162949)
Here's the code:
#!/usr/bin/perl -w
use strict;
use Benchmark;
sub step1 {
my $step = shift;
@_[map {$_ * $step} 0..($#_ / $step)]; # curious what effect the bac
+kets would have
}
sub step2 {
my $step = shift;
@_[map {$_ * $step} 0..$#_ / $step];
}
sub step3 {
my $step = shift;
return map { $_[$_] } grep { $_ % $step == 0 } 0..$#_;
}
my @array = (0..1000000);
timethese(-10, {
'Step 1' => 'step1(500000, @array)',
'Step 2' => 'step2(500000, @array)',
'Step 3' => 'step3(500000, @array)'
});
The only reason I post this is I'm curoius as to why what would appear to be a quicker and more effient piece of code is consistently the slowest (although not by much). |