in reply to Re: Re: Re: to strict or not to strict
in thread to strict or not to strict

This is completely OT as far as this thread is concerned, and probably meaningless anyway, but I'd dearly like an explanation.

Reading tachyon's modified benchmark, it struck me that if purely the load time was of interest, then the simple expedient of deleting a non-existent hash key probably didn't quite cut it when it came to removing the "administrative overhead" of use from the equation, so I modified it slightly to create and delete a key each time, to see what effect that had.

The results are surprising, and somewhat confusing.

Pre-tye code

    use Benchmark 'cmpthese';

    my %hash = ( key => 'value' );

    cmpthese( -10, {
        with    => sub { delete $INC{strict.pm}; use strict 'refs'; my $a = 1; my $b = 2 },
        without => sub { $hash{key} = delete $hash{key}; my $a = 1; my $b = 2 },
    } );
    __END__

    P:\test>P:\bin\perl.exe junk.pl8
                Rate without    with
    without  25587/s      --    -79%
    with    121330/s    374%      --

Post-tye correction

    use Benchmark 'cmpthese';

    my %hash = ( key => 'value' );

    cmpthese( -3, {
        with    => sub { delete $INC{'strict.pm'}; use strict 'refs'; my $a = 1; my $b = 2 },
        without => sub { $hash{key} = delete $hash{key}; my $a = 1; my $b = 2 },
    } );
    __END__

    P:\test>..\bin\perl.exe junk.pl8
                Rate without    with
    without  24308/s      --    -82%
    with    136483/s    461%      --

Which seemed to indicate that, somehow, deleting a real key from %INC, opening, reading, and closing strict.pm, compiling it with all that entails, and then adding the key back to %INC was hugely quicker than simply deleting a key from a lexical hash and putting it back again?

Then it dawned on me that use is a compile-time directive, so it will only be executed once regardless of how many times the code is executed. So, swapping use for a require and a call to its import() routine (is that roughly equivalent?), I got these results.
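As an aside: perlfunc documents `use Module LIST` as being exactly equivalent to `BEGIN { require Module; Module->import( LIST ) }` — a method call, note, whereas the benchmarks below call `strict::import( 'refs' )` as a plain function, which passes 'refs' where the class name normally goes. A quick sketch (mine, separate from the benchmarks) checking that the documented require-plus-import form really does switch strict 'refs' on:

```perl
#!/usr/bin/perl
# Sketch: perlfunc documents `use strict 'refs'` as equivalent to
#   BEGIN { require strict; strict->import('refs') }
# Compile a snippet using that form and check that symbolic refs die.
my $snippet = q{
    BEGIN { require strict; strict->import('refs') }   # ~ use strict 'refs';
    my $name = "foo";
    print ${$name};    # symbolic reference: fatal under strict 'refs'
};
eval $snippet;
$blocked = ( $@ =~ /strict refs/ ) ? 1 : 0;    # package variable, deliberately
print $blocked ? "symbolic ref blocked\n" : "symbolic ref allowed\n";
```

On my reading, the eval should die with a "strict refs" error, confirming the two forms behave alike for this pragma.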

Pre-tye code

    use Benchmark 'cmpthese';

    my %hash = ( key => 'value' );

    cmpthese( -3, {
        with => sub {
            delete $INC{strict.pm};
            require 'strict.pm';
            strict::import( 'refs' );
            my $a = 1; my $b = 2
        },
        without => sub {
            $hash{key} = delete $hash{key};
            my $a = 1; my $b = 2
        },
    } );
    __END__

    P:\test>P:\bin\perl.exe junk.pl8
               Rate    with without
    with     7281/s      --    -71%
    without 24851/s    241%      --

The biggy!! Post-tye correction.

    use Benchmark 'cmpthese';

    my %hash = ( key => 'value' );

    cmpthese( -3, {
        with => sub {
            delete $INC{'strict.pm'};
            require 'strict.pm';
            strict::import( 'refs' );
            my $a = 1; my $b = 2
        },
        without => sub {
            $hash{key} = delete $hash{key};
            my $a = 1; my $b = 2
        },
    } );
    __END__

    P:\test>..\bin\perl.exe junk.pl8
               Rate    with without
    with      137/s      --    -99%
    without 24226/s  17576%      --

Which, apart from the fact that tachyon's machine is about 20x quicker than mine, is a mildly interesting result. However, I'm not sure that I'm really comparing eggs with eggs, so I had another go.

This time, I thought I would force Benchmark to re-compile (eval) both snippets each time it exercised them, rather than just call a pre-compiled sub and I got these results.

Pre-tye code

    use Benchmark 'cmpthese';

    my %hash = ( key => 'value' );

    cmpthese( -3, {
        with => q[
            BEGIN{ delete $INC{strict.pm}; use strict 'refs' }
            my $a = 1; my $b = 2
        ],
        without => q[
            BEGIN{ $hash{key} = delete $hash{key} }
            my $a = 1; my $b = 2
        ],
    } );
    __END__

    P:\test>P:\bin\perl.exe junk.pl8
                Rate without    with
    without 374581/s      --     -0%
    with    375658/s      0%      --

Post-tye corrected benchmark

    use Benchmark 'cmpthese';

    my %hash = ( key => 'value' );

    cmpthese( -10, {
        with => q[
            BEGIN{ delete $INC{'strict.pm'}; use strict 'refs' }
            my $a = 1; my $b = 2
        ],
        without => q[
            BEGIN{ $hash{'key'} = delete $hash{key} }
            my $a = 1; my $b = 2
        ],
    } );
    __END__

    P:\test>..\bin\perl.exe junk.pl8
                Rate    with without
    with    368780/s      --     -2%
    without 375558/s      2%      --

    P:\test>..\bin\perl.exe junk.pl8
                Rate    with without
    with    371638/s      --     -1%
    without 374367/s      1%      --

Which, without drawing any conclusions as I am still not particularly certain that I am really comparing eggs with eggs, is much closer to my real world experience of benchmarking complete programs with and without use strict.

Whenever I tried this before, I nearly always found that the differences were marginal, if detectable.

Did any of my attempts get closer to a real world test?


Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"Think for yourself!" - Abigail
Hooray!

Re^5: to strict or not to strict (use strict!)
by tye (Sage) on Oct 17, 2003 at 15:20 UTC
    $INC{strict.pm}

    That is so ironic that I nearly laughed out loud. Perhaps you should add "use strict" to your benchmark!

                    - tye

      My only defense Sir, I cannot tell a lie...it was him :)



Re: Re: Re: Re: Re: to strict or not to strict
by tachyon (Chancellor) on Oct 17, 2003 at 13:23 UTC

    ...machine is about 20x quicker than mine :-)

    The machine it ran on was a 1.4GHz Athlon with IDE disks. Out of interest I ran it on one of the devel servers (quad PIII Xeon 2GHz, RAID 5 SCSI, really fast disks). Of course you only get to use one processor, but there was a negligible performance difference.

        [root@devel3 root]# perl test.pl
        Benchmark: timing 10000000 iterations of with, without...
              with:  9 wallclock secs ( 8.64 usr +  0.00 sys =  8.64 CPU) @ 1157407.41/s (n=10000000)
           without:  4 wallclock secs ( 4.05 usr +  0.00 sys =  4.05 CPU) @ 2469135.80/s (n=10000000)
                      Rate    with without
        with     1157407/s      --    -53%
        without  2469136/s    113%      --

    What did you run it on out of interest?

    On the Benchmark side, the BEGIN cases have identical (and much faster) results because Perl only executes them once and then effectively does a NOP on that code section. At least, that is what I suspect....

    cheers

    tachyon

    s&&rsenoyhcatreve&&&s&n.+t&"$'$`$\"$\&"&ee&&y&srve&&d&&print

      What did you run it on out of interest?

      233MHz P-II.

      On the Benchmark side the BEGIN case have identical (and much faster results) because Perl only executes them once and then effectively does a NOP on that code section. At least that is what I suspect....

      Given that the BEGIN{}s are wrapped in strings, they won't actually be seen by the compiler until eval time, which would mean that they are re-compiled each time the benchmark is iterated. Unless Benchmark wraps string code arguments in a subroutine, takes a coderef to the compiled code, and invokes it via that for the second and subsequent runs, which is a possibility. I'll try to confirm or deny that.
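      A counting sketch (my own, not from the thread) can settle it. If Benchmark compiles a string snippet into a single looping wrapper, roughly `sub { for (1 .. $n) { CODE } }`, and evals that wrapper once, then a BEGIN in the string should fire once per compilation rather than once per iteration. The counter names here are my invention:

```perl
#!/usr/bin/perl
# Sketch: count how often a BEGIN inside a string snippet actually runs
# when Benchmark times it with a fixed iteration count.
use Benchmark 'timethis';

our $begin_runs = 0;
our $body_runs  = 0;

# If Benchmark evals one looping wrapper around the string, the BEGIN
# fires per compilation of that wrapper, not per pass of the loop.
timethis( 1000, q{ BEGIN { $main::begin_runs++ } $main::body_runs++ },
          'begin-count', 'none' );    # style 'none' suppresses the report

print "BEGIN ran $begin_runs time(s); body ran $body_runs times\n";
```

      On a stock Benchmark.pm I'd expect the BEGIN count to stay far below the body count (likely 1 vs 1000), which would support tachyon's suspicion that the BEGIN work is effectively a one-off.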



Re: Re: Re: Re: Re: to strict or not to strict
by Anonymous Monk on Oct 17, 2003 at 13:38 UTC
    Which apart from the fact that tachyon's machine is about 20x quicker than mine, is a mildly interesting result.
    I guess I fail to see how this is even mildly interesting. The rate of loading on your box appears to be 7,281 loads per second. For something that happens only once in a program, can that even be considered any kind of performance hit? I think not.

      One of the reasons load time can be an issue is for webservers that don't use mod_perl. Modules that impose a large load-time hit can cause a measurable degradation if they are being loaded hundreds of times a second, as with a busy website's CGI scripts. That, as I understand it, is the very basis of mod_perl's existence.

      What made it interesting was the very fact that, if the benchmark is valid, it shows that the runtime hit due to loading strict is slightly more than 1/10th of a millisecond (1 / 7,281 loads per second ≈ 0.14 ms per load), which when compared to beasts like CGI and the various DBI::* modules is so minuscule as to be totally irrelevant. That, I found mildly interesting in the context of this thread.



        ...is so minuscule as to be totally irrelevant, which I found mildly interesting in the context of this thread.
        Ah, I misinterpreted your mild interest entirely, sorry about that.