Maybe a 64/32-bit difference? But that works out marginally slower on my machine:
    #! perl -slw
    use strict;

    use Benchmark qw[ cmpthese ];

    cmpthese -1, {
        one_eval => q[
            my $t = 1;
            eval qq[ sub count { \$_[0] =~ tr[$t][$t] } ];
            my $c = 0;
            $c += count( $_ ) for 1 .. 1e6;
            print $c;
        ],
        many_evals => q[
            my $t = 1;
            my $c = 0;
            $c += eval qq[ \$_ =~ tr[$t][$t] ] for 1 .. 1e6;
            print $c;
        ],
        loop => q[
            my $t = 1;
            my $c = 0;
            for my $n ( 1 .. 1e6 ) {
                ++$c while $n =~ m[$t]g;
            }
            print $c;
        ],
        loop2 => q[
            my $t = 1;
            my $c = 0;
            for my $n ( 1 .. 1e6 ) {
                $c += () = $n =~ m[$t]g;
            }
            print $c;
        ],
    };
    __END__
    C:\test>junk30
    600001
    600001
    600001
    (warning: too few iterations for a reliable count)
    600001
    600001
    600001
    (warning: too few iterations for a reliable count)
    600001
    600001
    (warning: too few iterations for a reliable count)
    600001
    Subroutine count redefined at (eval 2000028) line 1.
    600001
    Subroutine count redefined at (eval 2000029) line 1.
    600001
    Subroutine count redefined at (eval 2000032) line 1.
    600001
    (warning: too few iterations for a reliable count)
                      Rate many_evals  loop2   loop one_eval
    many_evals 3.68e-002/s         --   -97%   -97%     -98%
    loop2           1.06/s      2777%     --    -7%     -57%
    loop            1.14/s      3006%     8%     --     -53%
    one_eval        2.44/s      6515%   130%   113%       --
Another reason for not using it is that it slows down quadratically, not linearly, as the number of hits increases, because the match in list context has to allocate an ever larger list which is then immediately discarded. In the timings below, the cost *per character* grows roughly tenfold when the string gets ten times longer:
    perl -MTime::HiRes=time -E"$m=1e5; $x='x'x$m; $t=time; $n=()=$x=~m[x]g; say +(time()-$t)/$m"
    1.44867897033691e-006

    perl -MTime::HiRes=time -E"$m=1e6; $x='x'x$m; $t=time; $n=()=$x=~m[x]g; say +(time()-$t)/$m"
    1.46089999675751e-005
In reply to Re^3: Returning transliteration from eval by BrowserUk
in thread Returning transliteration from eval by albert