I thought that it might, for a small set of inputs: the overhead of creating and populating the hash, as well as the assignments to two new lexicals, would only be overcome with a significantly larger data set.
I wondered, though. Yep. Changing it to this:
{
    my %cache;

    # Cache the extracted sort key per string so substr() runs once
    # per element rather than once per comparison.
    sub memoized {
        ( $cache{$a} ||= substr( $a, -(19 + 20), 19 ) )
            cmp
        ( $cache{$b} ||= substr( $b, -(19 + 20), 19 ) );
    }
}
gives a slight increase in speed (though my benchmarks are varying wildly right now: the ranking holds, but the percentage differences are all over the board).
              Rate      st  memoized   naive     grt
st         10281/s      --       -6%    -51%    -54%
memoized   10930/s      6%        --    -48%    -51%
naive      20942/s    104%       92%      --     -5%
grt        22143/s    115%      103%      6%      --
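For reference, here is a minimal sketch of how the four variants could be lined up with Benchmark::cmpthese. The data set, the key() helper, and the exact naive/st/grt implementations are my assumptions for illustration, not the code that produced the numbers above.

use strict;
use warnings;
use Benchmark qw(cmpthese);

# Hypothetical data set: 60-character strings whose sort key is the
# 19-character field extracted by substr($_, -(19+20), 19).
my @data = map { join '', map { ('a' .. 'z')[ rand 26 ] } 1 .. 60 } 1 .. 1000;

sub key { substr( $_[0], -(19 + 20), 19 ) }

cmpthese( -3, {
    # Recompute the key on every comparison.
    naive => sub {
        my @s = sort { key($a) cmp key($b) } @data;
    },
    # Cache each key in a hash, as in the memoized sub above.
    memoized => sub {
        my %cache;
        my @s = sort {
            ( $cache{$a} ||= key($a) ) cmp ( $cache{$b} ||= key($b) )
        } @data;
    },
    # Schwartzian Transform: map to [key, value], sort, map back.
    st => sub {
        my @s = map  { $_->[1] }
                sort { $a->[0] cmp $b->[0] }
                map  { [ key($_), $_ ] } @data;
    },
    # Guttman-Rosler Transform: prepend the key, sort plain strings,
    # then strip the key off again.
    grt => sub {
        my @s = map  { substr( $_, 19 ) }
                sort
                map  { key($_) . $_ } @data;
    },
} );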
$,=' ';$\=',';$_=[qw,Just another Perl hacker,];print@$_;