My question wasn't very clear... I wanted to know the relative performance of a blessed code ref vs. an unblessed code ref.
So I put together a little benchmark comparing:
- calling a code ref directly
- blessing a code ref and calling it directly
- blessing a code ref and calling another method in the object
Blessing the code ref does have a performance impact when calling it as a code ref, compared to an unblessed code ref. However, it is still cheap compared to calling a method on the object:
use strict;
use warnings;
use feature ":all";
use Benchmark qw<cmpthese>;

my $count = $ARGV[0] // 1;

package SUB_TEST {
    sub new    { bless sub { 1 }, __PACKAGE__ }
    sub method { 1 }
}

my $obj = SUB_TEST->new();
my $sub = sub { 1 };

cmpthese($count,
    {
        blessed => sub { $obj->() },
        method  => sub { $obj->method() },
        sub     => sub { $sub->() }
    }
);
Output:
Rate method blessed sub
method 18939394/s -- -30% -39%
blessed 27173913/s 43% -- -12%
sub 30864198/s 63% 14% --
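For context, a blessed code ref really is both things at once: `ref` reports the class it was blessed into, and the same reference can still be called directly as a sub. A minimal sketch (the package name `Dual` is made up for illustration):

```perl
use strict;
use warnings;

package Dual {
    # bless the anonymous sub itself into this package
    sub new    { bless sub { 42 }, __PACKAGE__ }
    sub method { 42 }
}

my $obj = Dual->new;

print ref($obj),      "\n";   # class name: Dual
print $obj->(),       "\n";   # called as a plain code ref: 42
print $obj->method(), "\n";   # called as a method: 42
```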
Benchmarking is always tricky: providing subrefs to benchmark makes it easier to get the code right, but if the target code is very fast the sub call overhead can swamp the results. Providing strings instead gets rid of that overhead, but is far harder to get right because the code will be evalled in a different context.
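To make that trade-off concrete, here is a minimal sketch (the counter variable is my own) of the two ways to hand work to `cmpthese`. The string form is eval'd inside the Benchmark package, so any package variables must be fully qualified:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

our $x = 0;

cmpthese(100_000, {
    as_subref => sub { ++$x },     # easy to write, but one extra sub call per iteration
    as_string => '++$main::x',     # no wrapper sub; note the fully qualified name
});
```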
Using a string version of the benchmark, I confirm your results on my system perl (5.28) but the difference almost entirely disappears with 5.34 (which is also quite a bit faster overall):
% perl benchmark
Rate methodic blessy direct
methodic 16017006/s -- -31% -40%
blessy 23189661/s 45% -- -13%
direct 26673243/s 67% 15% --
% /opt/v5.34.0/bin/perl benchmark
Rate methodic blessy direct
methodic 21275455/s -- -39% -39%
blessy 34927866/s 64% -- -0%
direct 35023414/s 65% 0% --
% cat benchmark
use strict;
use warnings;
use Benchmark;

our ($mcount, $bcount, $dcount) = (0) x 3;

package Methodic {
    sub new { return bless {} }
    sub method { ++$::mcount }
};

package Blessy {
    sub new { return bless sub { ++$::bcount } }
};

sub direct { ++$::dcount }

our $methodic = Methodic->new;
our $blessy   = Blessy->new;
our $direct   = \&direct;

Benchmark::cmpthese(-1, {
    methodic => q{$::methodic->method()},
    blessy   => q{$::blessy->()},
    direct   => q{$::direct->()},
});
%
Note also that if you look at the counters after the benchmark has run, you'll see larger numbers than those that were reported. IIRC this is because Benchmark tries to calculate the overhead of calling and adjust the results for it, and it should not be taken as a sign that it can't count. :)
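That effect is easy to see with your own counter: run one sub under `timethese` with a CPU-time target and compare the counter against the iteration count Benchmark reports. The counter comes out larger because Benchmark also executes the code while calibrating its loop size, and those runs are not included in the report. A small sketch:

```perl
use strict;
use warnings;
use Benchmark qw(timethese);

our $count = 0;

# negative first argument: run for at least that many CPU seconds
my $results  = timethese(-0.2, { counted => sub { ++$count } });

# each Benchmark result object exposes the reported iteration count
my $reported = $results->{counted}->iters;

printf "reported: %d, actually executed: %d\n", $reported, $count;
```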
Thanks for the reply and benchmark insights.
What OS are you running?
Running your benchmark on my laptop (macOS 12.1, perl 5.34), I get similar percentage differences to my original benchmark code:
macOS 12.1, your benchmark code:
Rate methodic blessy direct
methodic 16445180/s -- -27% -35%
blessy 22503930/s 37% -- -11%
direct 25252404/s 54% 12% --
macOS 12.1, rerun of my original benchmark code:
Rate method blessed sub
method 17857143/s -- -35% -44%
blessed 27397260/s 53% -- -14%
sub 31746032/s 78% 16% --