Your code above never does anything with the modified $item (it never pushes it into @newdata), which means that your loop does exactly nothing, but slowly. However, looking past that...
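For illustration only (the shape of this loop is assumed, not taken from your actual code), the fix is simply to push the modified item onto the new array inside the loop:

    # Hypothetical reconstruction; convert() stands in for whatever
    # transformation is being applied to each item.
    my @newdata;
    for my $item ( @data ) {
        $item = convert( $item );
        push @newdata, $item;    # without this, @newdata stays empty
    }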
"This is weird but shifting items one by one into new array seems to be a lot faster than iterating over the array with canonical for (@list)."
Update: Ignore. The Benchmark is broken!
Frankly, I thought you were talking through the top of your hat when you said that, until I benchmarked it. And, despite my spending a while trying to see a flaw in the benchmark, you seem to be right. Not only am I surprised that it seems to be true, but I'm utterly staggered by the difference in performance. And at an utter loss to explain why it should be the case.
    our @a = @b = @c = '0001' .. '1000';;

    cmpthese -1, {
        a => q[ $_ += 0 for @a; ],
        b => q[ my @new; push @new, $_ + 0 while defined( $_ = shift @b ) ],
        c => q[ $c[ $_ ] += 0 for 0 .. $#c; ],
    };;

              Rate       c       a       b
    c       6220/s      --    -37%   -100%
    a       9893/s     59%      --   -100%
    b    4562313/s  73247%  46016%      --
And the bigger the array, the more extraordinary the difference becomes:
    our @a = @b = @c = @d = '00001' .. '10000';;

    cmpthese -1, {
        a => q[ $_ += 0 for @a; ],
        b => q[ my @new; push @new, $_ + 0 while defined( $_ = shift @b ) ],
        c => q[ $c[ $_ ] += 0 for 0 .. $#c; ],
        d => q[ my @new = map $_ += 0, @d ],
    };;

              Rate        d        c        a        b
    d        258/s       --     -58%     -72%    -100%
    c        615/s     138%       --     -34%    -100%
    a        932/s     261%      52%       --    -100%
    b    4651085/s 1800042%  756579%  499189%      --
There is something amiss here, but if it is the benchmark I cannot see it.
And if not, I am at a loss to explain why creating a new array by pushing the items onto it one at a time, whilst destroying the old one, would be so much faster than iterating over the original in place.
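For what it's worth, my best guess at the flaw (an inference on my part, not something the output above proves) is that the b snippet is destructive: shift @b empties @b during the very first timed run, so every subsequent run Benchmark performs finds an empty array and does essentially no work, which inflates b's rate enormously. Checking scalar @b after cmpthese returns would confirm it. A sketch of a fairer comparison, rebuilding the working copy inside each timed sub so the destructive shift cannot help b:

    use strict;
    use warnings;
    use Benchmark qw( cmpthese );

    our @src = '0001' .. '1000';

    cmpthese -1, {
        # Each sub takes a fresh copy of @src, so all three pay the same
        # copy cost and none benefits from an array emptied on a prior run.
        a => sub { my @a = @src; $_ += 0 for @a; },
        b => sub { my @b = @src; my @new;
                   push @new, $_ + 0 while defined( $_ = shift @b ); },
        c => sub { my @c = @src; $c[ $_ ] += 0 for 0 .. $#c; },
    };

The per-iteration copy dominates for arrays this small, so the absolute rates drop, but the three cases are at least doing the same amount of real work.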
In reply to Re: Unpacking and converting by BrowserUk
in thread Unpacking and converting by dwalin