in reply to Re: Non-destructive array processing
in thread Non-destructive array processing
    my @ary_copy = (@array); # straightforward, easy to understand
I agree, and it is exactly how I usually do this. Unfortunately, I ran into some huge data and couldn't copy without installing additional RAM :)
But the closure trick is very clever. I wonder which is faster.
Wonder no more, unless my benchmark is wrong, of course :)
    #!/usr/bin/perl -w
    use strict;
    use Benchmark qw(cmpthese);

    our @array;

    sub pdcawley {
        my @copy = @array;
        while (my @chunk = splice @copy, 0, 2) { }
    }

    sub juerd {
        my $refs = sub { \@_ }->(@array);
        while (my @chunk = splice @$refs, 0, 2) { }
    }

    sub bench {
        printf "\n\e[1m%s\e[0m\n", shift;
        cmpthese(-10, { pdcawley => \&pdcawley, juerd => \&juerd });
    }

    @array = (1) x 32767;
    bench "Long array, tiny values";

    @array = ("x" x 32) x 32767;
    bench "Long array, small values";

    @array = (1) x 32;
    bench "Short array, tiny values";

    @array = ("x" x 32) x 32;
    bench "Short array, small values";

    @array = ("x" x (2**20)) x 32;
    bench "Short array, large values";

    @array = ("x" x (8 * 2**20)) x 32;
    bench "Short array, huge values";
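Before the results, a quick aside on why the juerd sub avoids the copy at all: @_ aliases its arguments, so sub { \@_ }->(@array) returns a reference to an array whose elements share storage with @array. This is just a minimal demonstration of that aliasing, separate from the benchmark:

    #!/usr/bin/perl -w
    use strict;

    my @array = ('a' .. 'f');

    # @_ aliases its arguments, so \@_ refers to an array whose elements
    # share storage with @array; nothing is copied here.
    my $refs = sub { \@_ }->(@array);

    # Writing through an alias is visible in the original array.
    $refs->[0] = 'CHANGED';
    print "$array[0]\n";                        # prints: CHANGED

    # Splicing @$refs consumes only the aliases; @array keeps all of its
    # elements, and the values themselves are never duplicated.
    while (my @chunk = splice @$refs, 0, 2) {
        print "chunk: @chunk\n";
    }
    print scalar(@array), " elements left in \@array\n";   # prints: 6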
(Note: stripped)
    Long array, tiny values
    pdcawley  26.1/s    --  -17%
    juerd     31.4/s   20%    --

    Long array, small values
    pdcawley  12.9/s    --  -38%
    juerd     20.7/s   60%    --

    Short array, tiny values
    pdcawley  32909/s   --   -1%
    juerd     33197/s   1%    --

    Short array, small values
    pdcawley  19203/s   --  -17%
    juerd     23084/s  20%    --

    Short array, large values
    pdcawley  1.83/s    --  -53%
    juerd     3.89/s  112%    --

    Short array, huge values
    pdcawley  4.32      --  -53%
    juerd     2.04    112%    --
I'd like to test it with an array of 32 elements of 20 MB each, but such a copy wouldn't fit in memory.
Anyhow, it seems that using the array of aliases is much more efficient than using a copy, especially with large data sets.
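A possible way to wrap the trick up for reuse, since @_ is itself an array of aliases (an untested sketch; the name process_chunks and its interface are just illustrative, not something from this thread):

    #!/usr/bin/perl -w
    use strict;

    # Hypothetical helper: process a list in fixed-size chunks without
    # copying or modifying the caller's array.
    sub process_chunks {
        my $size     = shift;
        my $callback = shift;
        my $aliases  = \@_;   # what is left in @_ aliases the caller's data
        while (my @chunk = splice @$aliases, 0, $size) {
            $callback->(@chunk);
        }
    }

    my @array = (1 .. 7);
    process_chunks(2, sub { print "@_\n" }, @array);
    print scalar(@array), " elements still in \@array\n";   # prints: 7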
Juerd
- http://juerd.nl/
- spamcollector_perlmonks@juerd.nl (do not use).
Re: Re: Re: Non-destructive array processing
by pdcawley (Hermit) on Jan 21, 2003 at 20:00 UTC