Ovid has asked for the wisdom of the Perl Monks concerning the following question:
I've been thinking recently about using warnings in Perl. Turning on warnings during development is a great thing and should be encouraged. However, sometimes warnings flag things that aren't actually problems, and you are forced to add code just to silence them:
    my $x = some_sub();
    if ( $x > 2 ) {
        # do something here
    }

In the code above, what if some_sub() returns undef? If you have warnings turned on, Perl will complain that you've tried to use an uninitialized value.
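Depending on your Perl version, the complaint looks something like this (the filename and line number here are placeholders):

    Use of uninitialized value in numeric gt (>) at script.pl line 4.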
So what? I know that an uninitialized value evaluates as zero in numeric context, so the above code should run fine (assuming that an undef value is not indicative of an issue that I should be addressing in my code). If I want to suppress the warning, I can (in 5.6) use lexically scoped warnings with no warnings 'uninitialized'. This is overkill. Prior to 5.6, I could use dynamically scoped suppression of warnings with local $^W. Again, this seems like overkill if I am just turning off warnings for a conditional (both approaches are sketched after the next snippet). The simplest, and most common, way of disabling this warning is the following:
    if ( defined $x && $x > 2 ) {
        # do something here
    }

Well, that's just great. Now I've added some overhead to my program to test for something that's really only a problem because I've used warnings. Ugh! Why should I care if $x is defined if it's just going to evaluate as zero? If the conditional were $x < 2, then I might be worried about whether I want to distinguish an undefined value from zero, but that's not the case here.
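For the record, here is roughly what those heavier-weight alternatives look like; a sketch, assuming the check lives in a small block of its own:

    # Lexically scoped (5.6 and later): only this block is affected.
    {
        no warnings 'uninitialized';
        if ( $x > 2 ) {
            # do something here
        }
    }

    # Dynamically scoped (pre-5.6): $^W is restored on block exit.
    {
        local $^W = 0;
        if ( $x > 2 ) {
            # do something here
        }
    }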
Going to such lengths to worry about optimizations seems a bit ridiculous, but to quote gnat in http://prometheus.frii.com/~gnat/yapc/1999-lies/slide3.html:
    Speed does matter, just ask your girlfriend. The difference between
    2 seconds and 3 seconds is, duh, 1 second. Do that 100,000 times,
    though, and you've got over a day being wasted. Not every speed
    problem is the fault of a poor algorithm.

Since I wanted to address this issue, I decided to try a benchmark:
    #!/usr/bin/perl
    use Benchmark;
    use strict;

    my ( @nums, $x );
    for (1..1000) {
        my $r = int(rand(5));
        push @nums, $r > 0 ? $r : undef;
    }
    timethese(-10, {
        Test_defined => 'for (@nums) { if ( defined $_ and $_ > 2 ) { $x++ }}',
        Do_not_test  => 'for (@nums) { if ( $_ > 2 ) { $x++ }}'
    });

This produced the following output on my box:
    Benchmark: running Do_not_test, Test_defined, each for at least 10 CPU seconds...
    Do_not_test: 11 wallclock secs (10.01 usr +  0.00 sys = 10.01 CPU) @ 365830.27/s (n=3661961)
    Test_defined:  8 wallclock secs (10.02 usr +  0.00 sys = 10.02 CPU) @ 363485.63/s (n=3642126)

What? The extra test for $_ does not appear to have affected the execution speed. I have run this several times (and for up to 30 CPU seconds each) and I get this result consistently. Am I missing something here? Shouldn't testing whether a variable is defined have some impact on execution speed, however minimal, or have I just created a poor benchmark?
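One variation that might be worth trying, in case the string snippets themselves are the problem: Benchmark compiles string snippets in its own package, so a lexical like @nums may not even be visible to them. Passing code references instead makes the timed code close over the lexicals. A sketch, with the same setup as above:

    #!/usr/bin/perl
    use Benchmark;
    use strict;

    my ( @nums, $x );
    for (1..1000) {
        my $r = int(rand(5));
        push @nums, $r > 0 ? $r : undef;
    }

    # Code refs close over the lexical @nums and $x; string snippets
    # are eval'd inside the Benchmark package and may not see them.
    timethese(-10, {
        Test_defined => sub { for (@nums) { if ( defined $_ and $_ > 2 ) { $x++ } } },
        Do_not_test  => sub { for (@nums) { if ( $_ > 2 ) { $x++ } } },
    });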
I would also like to hear other monks' opinions on irrelevant tests for "definedness". I know that warnings about the use of undefined values are usually a good thing, but I don't want to test for something I don't need to. Additionally, such errors can fill up my error logs and make finding important errors more difficult (though I confess that those "undefined" warnings are often highly relevant).
Cheers,
Ovid
Join the Perlmonks Setiathome Group or just click on the link and check out our stats.