Re: "my" cost
by Fletch (Bishop) on Aug 17, 2024 at 01:22 UTC
The my has both a compile-time and a runtime effect. At compile time it says this variable will live lexically in the declared scope; that happens once, when the code is compiled to the optree. There is also a runtime effect: an extra opcode (or opcodes) runs to set up that new variable in its pad each time the declaration is reached. It's that extra opcode work that makes the difference between being done once before the loop versus on each iteration of the loop.
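If you want to see that extra opcode for yourself, B::Concise can dump the optree. A rough sketch (mine, not from the parent post; the exact output differs between perl versions):

# Compare the loop bodies of the two styles:
perl -MO=Concise,-exec -e 'my $x; $x = 1 for 1 .. 1e6;'
perl -MO=Concise,-exec -e 'for (1 .. 1e6) { my $x = 1 }'
# Roughly: in the second dump the pad op for $x inside the loop body is flagged
# as introducing the lexical, which is the per-iteration work; in the first dump
# the body just assigns to a pad slot that was set up before the loop.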
In your example it only changes the runtime, but the difference in scoping also changes the lifetime of the stored value. For instance, if the stored value were an object with a destructor, in the first case it would (possibly) become unreferenced when the next loop iteration assigned a new value, while in the second it would become unreferenced as each loop iteration completed. Not a big difference here, but it's worth remembering the compile/run dual nature.
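A minimal sketch of that lifetime difference (my example, not from the parent post), using a made-up Noisy class whose destructor just warns:

package Noisy;
sub new     { bless {}, shift }
sub DESTROY { warn "destroyed\n" }

package main;

my $outside;
for my $i (1 .. 2) {
    $outside = Noisy->new;   # previous object released here, during the next iteration's assignment
}
# the final object hangs around until $outside itself goes out of scope

for my $i (1 .. 2) {
    my $inside = Noisy->new; # released at the end of each iteration
}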
Edit: Of course after submitting I remember another slightly surprising corner case: for things like grep that have both an EXPR and a BLOCK form, the former will be similarly faster because the expression runs in the context of the enclosing block, whereas the latter has to run a couple of extra block enter/leave steps.
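If you want to measure that grep difference, something like this Benchmark sketch (mine; numbers will of course vary) compares the two forms side by side:

use Benchmark qw(cmpthese);
my @nums = 1 .. 1000;
cmpthese(-1, {    # run each form for at least 1 CPU second
    expr  => sub { my @odd = grep $_ % 2, @nums },
    block => sub { my @odd = grep { $_ % 2 } @nums },
});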
The cake is a lie.
Nice, this will take a while to eat.
Re: "my" cost
by eyepopslikeamosquito (Archbishop) on Aug 17, 2024 at 03:27 UTC
G'day Danny,
I see you have a keen interest in code performance.
To help you improve further in this area, you might be interested in:
While comparing code performance can be a fun way to learn, I feel obliged to warn of its dangers:
i like your eyes: lots to look at :)
Re: "my" cost
by GrandFather (Saint) on Aug 18, 2024 at 11:38 UTC
use Time::HiRes qw(time);

my @results = ();
push @results, [TimeIt()] for 1 .. 10;
shift @results;    # First result is polluted by startup processing

@results = sort {$a->[0] <=> $b->[0]} @results;
printf "Out: %.5f In: %.5f diff: %+.6f\n", @$_, $_->[1] - $_->[0] for @results;
printf "Out delta %f\n", $results[-1][0] - $results[0][0];
@results = sort {$a->[1] <=> $b->[1]} @results;
printf "In delta %f\n", $results[-1][1] - $results[0][1];

sub TimeIt {
    my ($i);

    # Time assignment with $x declared once, outside the loop
    my $start = time;
    {
        my $x;
        for $i (1 .. 1e6) {
            $x = 1;
        }
    }
    my $end = time;
    my $deltaOut = $end - $start;

    # Time assignment with $x declared inside the loop body
    $start = time;
    for $i (1 .. 1e6) {
        my $x = 1;
    }
    $end = time;
    my $deltaIn = $end - $start;

    return $deltaOut, $deltaIn;
}
For one run prints:
Out: 0.01120 In: 0.01609 diff: +0.004889
Out: 0.01121 In: 0.01474 diff: +0.003521
Out: 0.01123 In: 0.01501 diff: +0.003781
Out: 0.01126 In: 0.01509 diff: +0.003836
Out: 0.01128 In: 0.01494 diff: +0.003654
Out: 0.01135 In: 0.01491 diff: +0.003559
Out: 0.01148 In: 0.01524 diff: +0.003761
Out: 0.01291 In: 0.01497 diff: +0.002066
Out: 0.01358 In: 0.01504 diff: +0.001458
Out delta 0.002376
In delta 0.001356
Note that the difference between the fastest and slowest "outside the loop" times was close to the individual "inside loop" minus "outside loop" differences. This varies somewhat from run to run, but ultimately, at this level, your code's performance often depends more on external factors than on the code itself.
Perhaps also note that my results are about a factor of two faster than yours, so maybe you just need to buy a faster computer? That's not meant to be mean or snarky, just to point out that micro-optimisation of this sort can usually be beaten either by getting faster hardware or by improving algorithms. You may be interested in Youtube - Matt Parker: Someone improved my code by 40,832,277,770% to see the extreme version of this comment!
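For what it's worth, another way to smooth out that run-to-run noise is to let Benchmark drive the comparison. A sketch (mine, not part of the code above):

use Benchmark qw(cmpthese);
cmpthese(-3, {    # run each variant for at least 3 CPU seconds
    my_outside => sub { my $x; for (1 .. 1e6) { $x = 1 } },
    my_inside  => sub { for (1 .. 1e6) { my $x = 1 } },
});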
Optimising for fewest key strokes only makes sense transmitting to Pluto or beyond
micro optimisation of this sort can usually be beaten by either getting faster hardware or improving algorithms. You may be interested in Youtube - Matt Parker: Someone improved my code by 40,832,277,770% to see the extreme version of this comment!
Spot on Gramps, especially improving algorithms! If you don't do that, your carefully micro-optimized code will likely end up as a fast slow program.
While your cited example of improving the running time from 32 days down to 0.006761 seconds is certainly extreme, it didn't really surprise me, after years (off and on) of becoming obsessed with performance challenges. As documented in gory detail here (see The 10**21 Problem links), I remember gasping in disbelief a few years back when, obsessively working alone, I eventually coaxed some complex code into running 50 million times faster. IIRC, it took me about a year, and I remember being constantly surprised when one insight led to another ... and another ... and another.
When you open up performance challenges to fierce competition (especially if marioroy is involved), expect astonishing things to happen. :-)
Re: "my" cost
by Anonymous Monk on Aug 17, 2024 at 01:14 UTC
Creating a variable and assigning a value takes more time than assigning a value to an existing variable. That seems very reasonable. Is 0.00722599029541 seconds really a long time?
Is 0.00722599029541 seconds really a long time?
It is if you need to iterate through the loop enough times!
However, I don't concern myself with how long things take until they take "too long". At that point, I look to see why they are taking so long and what we can do about it.
It is if you need to iterate through the loop enough times!
How do you figure that?
If you're going to loop a million times, it still only adds about 0.007 s cumulatively. It will only add up to 7 s if you do a billion passes of the loop. If you're doing something a billion times in Perl, it's going to take an hour, a day, or more. An extra 7 s isn't going to matter. This is the point where you offload the work to C or something, not move a my.
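For the arithmetic, a quick back-of-the-envelope check (a sketch, assuming roughly 7 ns of extra work per pass, the figure quoted elsewhere in this thread):

my $per_pass = 7e-9;                                  # seconds
printf "1e6 passes: %g s extra\n", $per_pass * 1e6;   # 0.007
printf "1e9 passes: %g s extra\n", $per_pass * 1e9;   # 7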
Doesn't seem like a long time, but I've always been curious about this. I suppose in some scenarios it could be beneficial to declare the my variable outside the loop.
Re: "my" cost
by ikegami (Patriarch) on Aug 18, 2024 at 02:59 UTC
You "showed" a cost of 0.000,000,007 s or 7 ns.
That's the time needed to clear $x's value.
So we're clear, the point I was making is that even if one is desperate about speed, this is not the change one should make, because one could save seconds by doing other changes instead.
Upd: Oops, thought the parent was a reply to Re^3: "my" cost
Re: "my" cost
by ibm1620 (Hermit) on Aug 18, 2024 at 19:39 UTC
Strangely enough, I get the following results running on perl 5.40, MacBook Air M1 2020, MacOS 14.6.1:
Air:~/private/perl$ tim
v5.40.0
0.0340678691864014 my outside
0.0305979251861572 my inside
Air:~/private/perl$ tim
v5.40.0
0.028965950012207 my outside
0.0283629894256592 my inside
Air:~/private/perl$ tim
v5.40.0
0.0341758728027344 my outside
0.0308210849761963 my inside
Air:~/private/perl$ tim
v5.40.0
0.0342879295349121 my outside
0.0307750701904297 my inside
Air:~/private/perl$ tim
v5.40.0
0.0342922210693359 my outside
0.0308101177215576 my inside
Running the same code on my 2012 Intel MacBookPro, OSX 10.15.7, perl 5.40, gives the expected results (outside is faster).