
...using strict creates overhead although it's not an optimization to write home about.

I assume you're referring to the runtime check for symbolic references, as that is the only run-time component for stricture?

$ perl -MO=Deparse -e '{use strict; my $a = 1}'
{
    use strict 'refs';
    my $a = 1;
}
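To make that run-time component concrete, here is a minimal sketch (variable names are mine) of the symbolic-reference check that strict 'refs' adds; the vars and subs strictures, by contrast, act purely at compile time:

our $foo = 42;
my $name = 'foo';

{
    no strict 'refs';
    print ${$name}, "\n";   # symbolic dereference: prints 42
}

{
    use strict 'refs';
    # Uncommented, the next line dies at run time with
    # Can't use string ("foo") as a SCALAR ref while "strict refs" in use
    # print ${$name}, "\n";
}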

Well, I've taken the liberty of benchmarking the use of strict refs, and I must say it's a meager optimization indeed. I ran this code on a (new) completely unloaded machine: the difference is noticeable, but it is almost completely drowned out by noise.

use Benchmark qw(:all);
cmpthese( 10000000, {
    with    => sub { use strict 'refs'; my $a = 1; my $b = 2 },
    without => sub { my $a = 1; my $b = 2 },
} );
__END__
             Rate    with without
with    3378378/s      --     -7%
without 3623188/s      7%      --

             Rate without    with
without 3802281/s      --     -3%
with    3937008/s      4%      --

             Rate    with without
with    3412969/s      --    -15%
without 4032258/s     18%      --

             Rate    with without
with    4048583/s      --     -3%
without 4166667/s      3%      --

             Rate    with without
with    3460208/s      --     -9%
without 3787879/s      9%      --

It would seem to me that if you need this type of optimization, you'd better start coding parts of your program in C ;-). So I wouldn't remove "use strict" from a production script for this reason: from a sysadmin point of view, the likelihood of a future "quick hack" on that production script messing things up without warning would just be too great for me.

Liz

Re: Re: Re: to strict or not to strict
by tachyon (Chancellor) on Oct 17, 2003 at 09:16 UTC

    Your benchmark only ever loads strict once, making it only marginally valid. If you start to factor in load times.....

    use Benchmark 'cmpthese';
    my %hash;
    cmpthese( 10000000, {
        with    => sub { delete $INC{strict.pm}; use strict 'refs'; my $a = 1; my $b = 2 },
        without => sub { delete $hash{key}; my $a = 1; my $b = 2 },
    } );
    __DATA__
    Benchmark: timing 10000000 iterations of with, without...
          with:  8 wallclock secs ( 9.10 usr +  0.00 sys =  9.10 CPU) @ 1098538.94/s (n=10000000)
       without:  3 wallclock secs ( 4.04 usr +  0.00 sys =  4.04 CPU) @ 2478314.75/s (n=10000000)
                 Rate    with without
    with    1098539/s      --    -56%
    without 2478315/s    126%      --

    You need to delete an imaginary hash key in the without case to be fair. This is a more realistic benchmark outside of persistent processes. Nonetheless, the contribution of strict to the total overhead is trivial, and I always use it.
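    A rough way to see that one-time load cost outside a persistent process is plain shell timing; a sketch (numbers will vary with the machine and the state of the OS file cache):

    $ time perl -e 'use strict; my $a = 1'
    $ time perl -e 'my $a = 1'

    In a one-shot script the first form pays for locating, reading, and compiling strict.pm exactly once; in a persistent process that cost is amortized away entirely.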

    cheers

    tachyon

    s&&rsenoyhcatreve&&&s&n.+t&"$'$`$\"$\&"&ee&&y&srve&&d&&print

      Well, yes, that's what I meant when I said "I assume you're referring to the runtime check for symbolic references, as that is the only run-time component for stricture?" ;-) Personally, I always forget about module loading times, because I do most of my Perl work with mod_perl, with all modules loaded at server startup time.
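      As a minimal sketch of that mod_perl pattern (the file name and module list are hypothetical), a startup file named by a PerlRequire directive in httpd.conf pulls everything in once, so per-request code never pays module load time:

      # startup.pl -- hypothetical, loaded once at server startup via
      #   PerlRequire /path/to/startup.pl
      use strict;
      use Benchmark ();    # preloaded here; requests never re-read these files
      use CGI ();
      1;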

      But I am pleasantly surprised by the low overhead of loading strict: being able to load strict more than 1 million times per second is nice. But on the other hand, it indicates to me that the OS has the file in RAM already and is serving it from there. So in that sense the benchmark is also flawed ;-).
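      To spell out what the delete-from-%INC trick in the benchmark actually exercises, a sketch (the last comment is my assumption about where the file really comes from):

      require strict;               # first load: locates, reads and compiles strict.pm
      require strict;               # no-op: 'strict.pm' is already recorded in %INC
      delete $INC{'strict.pm'};     # what the benchmark does on every iteration
      require strict;               # forced re-read -- but almost certainly served
                                    # from the OS page cache, not from disk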

      Anyway, I think we've proven there is not a lot to be gained by not using strict from a performance point of view. And that there is a lot to be lost from a development / maintenance point of view (if you don't use strict).

      Liz

        But on the other hand, it indicates to me that the OS has the file in RAM already and is serving it from there. So in that sense the benchmark is also flawed ;-).

        Yes, and no. If you are in a situation where the OS hasn't cached 'strict.pm', that means 'strict.pm' isn't being used very often. But the less something is used, the less of a bottleneck it is, and the less impact higher load times have.

        So, you are right, but in situations where you are right, the outcome of the Benchmark isn't that important.

        Abigail

      This is completely OT as far as this thread is concerned, and probably totally meaningless anyway, but I'd dearly like an explanation.

      Reading tachyon's modified benchmark, it struck me that if load time alone was of interest, then the simple expedient of deleting a non-existent hash key probably didn't quite remove the "administrative overhead" of use from the equation, so I modified it slightly to create and delete a key each time to see what effect that had.

      The results are surprising, and somewhat confusing.


      The biggy!! Post-tye correction.

      use Benchmark 'cmpthese';
      my %hash = ( key => 'value' );
      cmpthese( -3, {
          with    => sub {
              delete $INC{'strict.pm'};
              require 'strict.pm';
              strict::import( 'refs' );
              my $a = 1; my $b = 2;
          },
          without => sub {
              $hash{key} = delete $hash{key};
              my $a = 1; my $b = 2;
          },
      } );
      __END__
      P:\test>..\bin\perl.exe junk.pl8
                 Rate    with without
      with      137/s      --    -99%
      without 24226/s  17576%      --

      Which, apart from the fact that tachyon's machine is about 20x quicker than mine, is a mildly interesting result. However, I'm not sure that I'm really comparing eggs with eggs, so I had another go.

      This time, I thought I would force Benchmark to re-compile (eval) both snippets each time it exercised them, rather than just call a pre-compiled sub and I got these results.


      Post-tye corrected benchmark

      use Benchmark 'cmpthese';
      my %hash = ( key => 'value' );
      cmpthese( -10, {
          with    => q[
              BEGIN{ delete $INC{'strict.pm'}; use strict 'refs' }
              my $a = 1; my $b = 2;
          ],
          without => q[
              BEGIN{ $hash{'key'} = delete $hash{key} }
              my $a = 1; my $b = 2;
          ],
      } );
      __END__
      P:\test>..\bin\perl.exe junk.pl8
                  Rate    with without
      with    368780/s      --     -2%
      without 375558/s      2%      --

      P:\test>..\bin\perl.exe junk.pl8
                  Rate    with without
      with    371638/s      --     -1%
      without 374367/s      1%      --

      Without drawing any conclusions, as I am still not particularly certain that I am really comparing eggs with eggs, this is much closer to my real-world experience of benchmarking complete programs with and without use strict.

      Whenever I tried this before, I nearly always found that the differences were marginal, if detectable.

      Did any of my attempts get closer to a real world test?


      Examine what is said, not who speaks.
      "Efficiency is intelligent laziness." -David Dunham
      "Think for yourself!" - Abigail
      Hooray!

        $INC{strict.pm}

        That is so ironic that I nearly laughed out loud. Perhaps you should add "use strict" to your benchmark!

                        - tye

        ...machine is about 20x quicker than mine :-)

        The machine it ran on was a 1.4GHz Athlon with IDE disks. Out of interest, I ran it on one of the devel servers (Quad PIII Xeon 2GHz, RAID 5 SCSI with really fast disks). Of course you only get to use one processor, but there was a negligible performance difference.

        [root@devel3 root]# perl test.pl
        Benchmark: timing 10000000 iterations of with, without...
              with:  9 wallclock secs ( 8.64 usr +  0.00 sys =  8.64 CPU) @ 1157407.41/s (n=10000000)
           without:  4 wallclock secs ( 4.05 usr +  0.00 sys =  4.05 CPU) @ 2469135.80/s (n=10000000)
                     Rate    with without
        with    1157407/s      --    -53%
        without 2469136/s    113%      --

        What did you run it on, out of interest?

        On the Benchmark side, the BEGIN cases give identical (and much faster) results because Perl only executes a BEGIN block once and then effectively treats that code section as a no-op. At least that is what I suspect....
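        A quick sketch supporting that suspicion (my own test, not from the thread): a BEGIN block inside a snippet that is compiled once runs once, however many times the compiled code is then executed:

        my $count = 0;
        my $sub = eval q{ sub { BEGIN { $count++ } my $a = 1 } };
        $sub->() for 1 .. 5;                  # call the compiled sub repeatedly
        print "BEGIN ran $count time(s)\n";   # prints: BEGIN ran 1 time(s)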

        cheers

        tachyon

        s&&rsenoyhcatreve&&&s&n.+t&"$'$`$\"$\&"&ee&&y&srve&&d&&print

        Which apart from the fact that tachyon's machine is about 20x quicker than mine, is a mildly interesting result.
        I guess I fail to see how this is even mildly interesting. The rate of loading on your box appears to be 7,281 loads per second. For something that happens only once in a program, can that even be considered any kind of performance hit? I think not.