in reply to to strict or not to strict

I only use strict and warnings in my code during development. (Academic note: believe it or not, using strict creates overhead, although it's not an optimization to write home about.) I'm far from perfect. I know I can make spelling erors (<--see), not keep proper case, or just accidentally append a random string to the end of a variable name from bashing my head against the keyboard. I may sometimes accidentally put $$array[0] instead of ${$array[0]}. Strict catches that (unless there's a $array, but I tend not to overlap variable names between list types and scalars). I consider not using strict during development to be like nailing two pieces of wood together in the dark: sure, you can do it, but the results may not be quite as pretty or as painless as they would be if you did it in a well-lit room.

...at least that's my opinion.
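For instance, here's a minimal sketch of the kind of slip strict flags at compile time (the misspelled $tolal and the file name are made up for illustration):

use strict;
use warnings;

my $total = 42;
print "total: $tolal\n";    # typo for $total
__END__
Global symbol "$tolal" requires explicit package name at typo.pl line 5.
Execution of typo.pl aborted due to compilation errors.

Without strict, the typo would quietly interpolate an empty string and the script would carry on as if nothing were wrong.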

Updated: The reason I don't use strict in my production scripts is actually because I use a series of dev modules that set up my programming environment. These specify such nifty things as how die and warn should react, as well as possibly setting up STDIN and %ENV for testing that particular script. I mentioned the overhead involved with using strict mostly as an academic aside. Why not explicitly put use strict; use warnings; in my script? Because that is done from my dev module's import. I don't need the "optimization"; I didn't mean to imply that I use it as one. I don't comment out use strict;. use strict; is meant to help me program, so what difference does it make if it isn't present in production? When the scripts go into production, a blank dev module with only a version and a __DATA__ section of development notes is provided to give any future developers some hints as to how I created the software, along with development tests, their output, and any notes that go with them. Sorry for the long update. I realized from the --'s on this post that I wasn't clear about what I originally meant to say.
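To make that concrete, here is a minimal sketch of the idea (the module name Dev.pm and everything in it is illustrative, not my actual module):

package Dev;
use strict;
use warnings;

our $VERSION = '0.01';

sub import {
    # "use Dev;" runs at compile time, and strict/warnings are lexically
    # scoped, so these calls enable both in the script that loaded us.
    strict->import;
    warnings->import;

    # This is also where dev-only die/warn behaviour and %ENV/STDIN
    # fixtures for testing the particular script would be set up.
}

1;
__DATA__
Development notes, tests, and their output go here.

In production the script's "use Dev;" line stays put; only this module's body is swapped for the blank version described above.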

antirice    
The first rule of Perl club is - use Perl
The ith rule of Perl club is - follow rule i - 1 for i > 1

Re: Re: to strict or not to strict
by liz (Monsignor) on Oct 17, 2003 at 08:28 UTC
    ...using strict creates overhead although it's not an optimization to write home about.

    I assume you're referring to the runtime check for symbolic references, as that is the only run-time component for stricture?

    $ perl -MO=Deparse -e '{use strict; my $a = 1}'
    {
        use strict 'refs';
        my $a = 1;
    }
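    To make that runtime component concrete, a symbolic dereference is the one construct strict still has to police while the program runs (example and file name mine):

    use strict 'refs';

    my $name = 'counter';
    $$name = 1;
    __END__
    Can't use string ("counter") as a SCALAR ref while "strict refs" in use at demo.pl line 4.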

    Well, I've taken the liberty of benchmarking the use of strict refs, and I must say it's a meager optimization indeed. I ran this code on a (new) completely unloaded machine: it would seem to me that the difference is noticeable, but it is almost completely drowned out by noise.

    use Benchmark qw(:all);
    cmpthese( 10000000, {
        with    => sub { use strict 'refs'; my $a = 1; my $b = 2 },
        without => sub { my $a = 1; my $b = 2 },
    } );
    __END__
                 Rate    with without
    with    3378378/s      --     -7%
    without 3623188/s      7%      --

                 Rate without    with
    without 3802281/s      --     -3%
    with    3937008/s      4%      --

                 Rate    with without
    with    3412969/s      --    -15%
    without 4032258/s     18%      --

                 Rate    with without
    with    4048583/s      --     -3%
    without 4166667/s      3%      --

                 Rate    with without
    with    3460208/s      --     -9%
    without 3787879/s      9%      --

    It would seem to me that if you need this type of optimization, you'd better start coding parts of your program in C ;-). So I wouldn't remove "use strict" from a production script for this reason: the likelihood of a future "quick hack" on that production script messing things up without warning would be just too great for me, from a sysadmin point of view.

    Liz

      Your benchmark only ever loads strict once, which makes it only marginally valid. If you start to factor in load times.....

      use Benchmark 'cmpthese';
      my %hash;
      cmpthese( 10000000, {
          with    => sub { delete $INC{'strict.pm'}; use strict 'refs'; my $a = 1; my $b = 2 },
          without => sub { delete $hash{key}; my $a = 1; my $b = 2 },
      } );
      __DATA__
      Benchmark: timing 10000000 iterations of with, without...
            with:  8 wallclock secs ( 9.10 usr +  0.00 sys =  9.10 CPU) @ 1098538.94/s (n=10000000)
         without:  3 wallclock secs ( 4.04 usr +  0.00 sys =  4.04 CPU) @ 2478314.75/s (n=10000000)
                   Rate    with without
      with    1098539/s      --    -56%
      without 2478315/s    126%      --

      You need to delete an imaginary hash key in the without case to be fair. This is a more realistic benchmark outside of persistent processes. Nonetheless, the contribution of strict to the total overhead is trivial, and I always use it.

      cheers

      tachyon

      s&&rsenoyhcatreve&&&s&n.+t&"$'$`$\"$\&"&ee&&y&srve&&d&&print

        Well, yes, that's what I meant when I said "I assume you're referring to the runtime check for symbolic references, as that is the only run-time component for stricture?" ;-) Personally, I always forget about module loading times, because I do most of my Perl work with mod_perl, with all modules loaded at server startup time.
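        For anyone who hasn't seen that setup: a minimal sketch of a mod_perl startup.pl (the module list is purely illustrative) that pulls everything in once when the server starts:

        # httpd.conf:  PerlRequire /path/to/startup.pl
        use strict;
        use warnings;

        # Preload modules in the parent httpd so every forked child
        # shares the already-compiled code instead of loading it per request.
        use CGI ();
        use DBI ();

        1;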

        But I am pleasantly surprised by the low overhead of loading strict: being able to load it more than a million times per second is nice. On the other hand, that tells me the OS already has the file in RAM and is serving it from there, so in that sense the benchmark is also flawed ;-).

        Anyway, I think we've shown there is not a lot to be gained, performance-wise, by leaving out strict, and that there is a lot to be lost from a development/maintenance point of view if you do.

        Liz

        This is completely OT as far as this thread is concerned, and is (probably) totally meaningless anyway, but I'd dearly like an explanation.

        Reading tachyon's modified benchmark, it struck me that if load time alone was of interest, then the simple expedient of deleting a non-existent hash key probably didn't quite cut it when it came to removing the "administrative overhead" of use from the equation. So I modified it slightly to create and delete a key each time, to see what effect that had.

        The results are surprising, and somewhat confusing.

        Pre-tye code

        The biggy!! Post-tye correction.

        use Benchmark 'cmpthese';
        my %hash = ( key => 'value' );
        cmpthese( -3, {
            with    => sub {
                delete $INC{'strict.pm'};
                require 'strict.pm';
                strict::import( 'refs' );
                my $a = 1; my $b = 2;
            },
            without => sub {
                $hash{key} = delete $hash{key};
                my $a = 1; my $b = 2;
            },
        } );
        __END__
        P:\test>..\bin\perl.exe junk.pl8
                   Rate    with without
        with      137/s      --    -99%
        without 24226/s  17576%      --

        Which, apart from the fact that tachyon's machine is about 20x quicker than mine, is a mildly interesting result. However, I'm not sure that I'm really comparing eggs with eggs, so I had another go.

        This time, I thought I would force Benchmark to re-compile (eval) both snippets each time it exercised them, rather than just call a pre-compiled sub, and I got these results.

        Pre-tye code

        Post-tye corrected benchmark

        use Benchmark 'cmpthese';
        my %hash = ( key => 'value' );
        cmpthese( -10, {
            with    => q[ BEGIN{ delete $INC{'strict.pm'}; use strict 'refs' } my $a = 1; my $b = 2 ],
            without => q[ BEGIN{ $hash{'key'} = delete $hash{key} } my $a = 1; my $b = 2 ],
        } );
        __END__
        P:\test>..\bin\perl.exe junk.pl8
                    Rate    with without
        with    368780/s      --     -2%
        without 375558/s      2%      --

        P:\test>..\bin\perl.exe junk.pl8
                    Rate    with without
        with    371638/s      --     -1%
        without 374367/s      1%      --

        Which, without drawing any conclusions (as I am still not particularly certain that I am comparing eggs with eggs), is much closer to my real-world experience of benchmarking complete programs with and without use strict.

        Whenever I tried this before, I nearly always found that the differences were marginal, if detectable.

        Did any of my attempts get closer to a real world test?


        Examine what is said, not who speaks.
        "Efficiency is intelligent laziness." -David Dunham
        "Think for yourself!" - Abigail
        Hooray!