in reply to Re^2: Unpacking and converting
in thread Unpacking and converting

I think in this case the array creation and population has a significant influence on the results. Consider this:
cmpthese -1, {
    d => q[ my @d = '0001' .. '1000'; ],
    c => q[ my @c = '0001' .. '1000'; $c[ $_ ] += 0 for 0 .. $#c; ],
    a => q[ my @a = '0001' .. '1000'; $_ += 0 for @a; ],
    b => q[ my @b = '0001' .. '1000'; my @new;
            push @new, $_ + 0 while defined( $_ = shift @b ) ],
};
Results speak for themselves:
     Rate     b     c     a     d
b  1267/s    --  -29%  -37%  -63%
c  1794/s   42%    --  -10%  -47%
a  2000/s   58%   11%    --  -41%
d  3413/s  169%   90%   71%    --

Regards,
Alex.

Replies are listed 'Best First'.
Re^4: Unpacking and converting
by andal (Hermit) on Feb 16, 2011 at 10:39 UTC
    I think in this case the array creation and population has a significant influence on the results. Consider this

    You missed the point. The shifting of elements from the array makes it empty after the first iteration. But Benchmark runs thousands of iterations within 1 second. All of those iterations work with an empty array, which explains why this approach appears to be "faster".
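
    A minimal sketch (not from the thread itself) that makes the emptying visible: the first pass through the shift loop drains the array, so a second pass over the same array has nothing left to do — exactly the situation every benchmark iteration after the first finds itself in.

    ```perl
    use strict;
    use warnings;

    my @b = ('0001' .. '0005');

    # First pass consumes @b entirely, just like the benchmarked snippet.
    my @new;
    push @new, $_ + 0 while defined( $_ = shift @b );
    printf "first pass: %d converted, %d left in \@b\n", scalar @new, scalar @b;

    # A second pass over the same @b finds nothing to do, which is
    # what happens on every benchmark iteration after the first.
    my @again;
    push @again, $_ + 0 while defined( $_ = shift @b );
    printf "second pass: %d converted\n", scalar @again;
    ```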

      Thank you for giving me my D'oh moment for today.


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.
      The shifting of elements from the array makes it empty after the first iteration.

      Ugh, I seem to be extra stupid today. But hey, it's still not that easy:

      my @s = '0001' .. '1000';
      cmpthese -1, {
          d => q[ my @d = @s; ],
          c => q[ my @c = @s; $c[ $_ ] += 0 for 0 .. $#c; ],
          a => q[ my @a = @s; $_ += 0 for @a; ],
          b => q[ my @b = @s; my @new;
                  push @new, $_ + 0 while defined( $_ = shift @b ) ],
      };
      Yields:
             Rate     c     a     b     d
      c  793688/s    --  -59%  -59%  -84%
      a 1941807/s  145%    --   -0%  -60%
      b 1942492/s  145%    0%    --  -60%
      d 4812084/s  506%  148%  148%    --
      So what do we have, shifting and for (@list) are equally fast? Not so:
      my @s = '0000001' .. '1000000';
      cmpthese -1, {
          d => q[ my @d = @s; ],
          c => q[ my @c = @s; $c[ $_ ] += 0 for 0 .. $#c; ],
          a => q[ my @a = @s; $_ += 0 for @a; ],
          b => q[ my @b = @s; my @new;
                  push @new, $_ + 0 while defined( $_ = shift @b ) ],
      };
      Gives these results:
             Rate     c     b     a     d
      c  764586/s    --  -58%  -62%  -85%
      b 1803742/s  136%    --  -10%  -65%
      a 2007409/s  163%   11%    --  -61%
      d 5119310/s  570%  184%  155%    --
      Which is logical, at last. The scandal of the century is averted and I stand corrected - the only thing left to find out is why my tests on actual data consistently show shifting to be faster than for (@list). Not several orders of magnitude faster, as in that faulty benchmark, but considerably so. I believe I have to look for an error...

      Regards,
      Alex.

Re^4: Unpacking and converting
by Anonyrnous Monk (Hermit) on Feb 16, 2011 at 10:26 UTC
    I think in this case the array creation and population has a significant influence on the results.

    Sure, but in the original case the benchmark is flawed: the test code is executed many times, and after the first execution the array @b has been emptied... (unlike the arrays in the other cases)

    Code that modifies test input is notoriously tricky to benchmark.
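
    One way (a sketch, not from the thread) to keep such a benchmark honest is to rebuild the input inside each timed snippet. Coderefs are used here instead of the thread's q[...] strings so the closures can see the lexical @template; the label names for_alias and shift_loop are made up for illustration.

    ```perl
    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    my @template = ('0001' .. '1000');

    # Sanity check first: both approaches must produce the same numbers.
    my @a = @template;
    $_ += 0 for @a;
    my @b = @template;
    my @new;
    push @new, $_ + 0 while defined( $_ = shift @b );
    print "approaches agree\n" if "@a" eq "@new";

    # Each timed run starts from a fresh copy, so the destructive
    # shift loop can no longer coast on an already-emptied array.
    cmpthese( -1, {
        for_alias  => sub { my @x = @template; $_ += 0 for @x; },
        shift_loop => sub {
            my @x = @template;
            my @y;
            push @y, $_ + 0 while defined( $_ = shift @x );
        },
    });
    ```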