in reply to Performance problem with Clone Method

You're unnecessarily using a second map level. Since your matrix is 2D (there are no deeper layers), there's no need to do any more than copy each second-level row, which can be done with [ @$_ ] within the outer map.
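To illustrate the point (a minimal sketch of my own, not from the original post): [ @$_ ] builds a brand-new anonymous array containing a copy of the row's elements, so mutating the copy leaves the original row untouched.

```perl
use strict;
use warnings;

my $row  = [ 1, 2, 3 ];
my $copy = [ @$row ];      # fresh arrayref holding copies of the elements

$copy->[0] = 'Problem!';   # mutate the copy only
print "$row->[0]\n";       # prints 1; the original row is unaffected
```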

In testing the code below I verified using Data::Dumper that both copies of @matrix were identical in structure. Then I created a copy and changed one of its elements to "Problem!". Then I re-printed the original @matrix to ensure that "Problem!" didn't propagate back to the original matrix (which would have indicated that I didn't get a copy, but rather an alias).
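The verification procedure described above can be sketched like this (my reconstruction, not the original test script):

```perl
use strict;
use warnings;
use Data::Dumper;

my @matrix = ( [ 1, 2 ], [ 3, 4 ] );
my $copy   = [ map { [ @$_ ] } @matrix ];

# Both structures should dump identically...
print "same structure\n"
    if Dumper( \@matrix ) eq Dumper( $copy );

# ...but mutating the copy must not propagate back to the original.
$copy->[0][0] = 'Problem!';
print "independent copy\n"
    if $matrix[0][0] == 1;
```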

After I was sure that I had duplicated your original functionality, I went ahead with benchmarks. As you'll see, the new "onemap" method is significantly faster.

use strict;
use warnings;
use Benchmark qw/cmpthese/;

my @matrix = (
    [  1,  2,  3,  4,  5, ],
    [  6,  7,  8,  9, 10, ],
    [ 11, 12, 13, 14, 15, ],
    [ 16, 17, 18, 19, 20, ],
    [ 21, 22, 23, 24, 25, ],
);

cmpthese( 50000, {
    onemap  => sub { my $result = onemap( \@matrix ); },
    twomaps => sub { my $result = twomaps( \@matrix ); },
} );

sub onemap {
    my $matrix = shift;
    return { Matrix => [ map { [ @$_ ] } @$matrix ] };
}

sub twomaps {
    my $matrix = shift;
    return {
        Matrix => [
            map {
                my $row = $_;
                [ map { ( $_ ); } @$row ]
            } @$matrix
        ]
    };
}

__END__
            Rate  twomaps   onemap
twomaps  62814/s       --     -39%
onemap  103306/s      64%       --

You would need to plug the algorithm back into your method call function.

Update with additional benchmarking:

I added the Clone module to my previous example and benchmarked everything again. I also generated a matrix of 100x100 random integers to more closely approximate the size of your datastructure. As I suspected, with a larger datastructure the time spent in subroutine-call overhead melted into the background, better showcasing the performance differences between the algorithms themselves. Here's the code and the results:

use strict;
use warnings;
use Benchmark qw/cmpthese/;
use Data::Dumper;
use Clone qw/clone/;

my @matrix = map { [ map { int( rand( 100 ) ) } 0 .. 99 ] } 0 .. 99;

cmpthese( 1000, {
    onemap  => sub { my $result = onemap( \@matrix ); },
    twomaps => sub { my $result = twomaps( \@matrix ); },
    clone   => sub { my $result = clone( \@matrix ); },
} );

sub onemap {
    my $matrix = shift;
    return { Matrix => [ map { [ @$_ ] } @$matrix ] };
}

sub twomaps {
    my $matrix = shift;
    return {
        Matrix => [
            map {
                my $row = $_;
                [ map { ( $_ ); } @$row ]
            } @$matrix
        ]
    };
}

__END__
          Rate   clone twomaps  onemap
clone    107/s      --    -78%    -93%
twomaps  475/s    344%      --    -70%
onemap  1562/s   1362%    229%      --

As you can see, the "onemap" technique is not just a little faster, it's a lot faster. If 80% of your time is spent in these functions, your runtime should be more than cut in half. It also becomes obvious that when performance is more important than flexibility, the difference between the custom-made clone and Clone::clone() is night and day.
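The "more than cut in half" claim follows from Amdahl's law. A quick back-of-the-envelope check (my arithmetic, using the rates from the benchmark above):

```perl
use strict;
use warnings;

my $speedup  = 1562 / 475;   # onemap vs. twomaps, roughly 3.3x
my $fraction = 0.80;         # share of total runtime spent cloning

# Amdahl's law: new runtime as a fraction of the old runtime
my $new_time = ( 1 - $fraction ) + $fraction / $speedup;
printf "new runtime: %.0f%% of the original\n", 100 * $new_time;
```

That works out to well under 50% of the original runtime, i.e. more than cut in half.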


Dave

Replies are listed 'Best First'.
Re^2: Performance problem with Clone Method
by Commandosupremo (Novice) on Jul 26, 2011 at 23:19 UTC

    Dave:

    I made the change from the nested maps to a single map, and it made a considerable improvement; thank you very much. I've never used the Benchmark module, but I think I will start using it as well, since its output looks more useful to me than SmallProf's.

      Glad it helped. By the way, there's a module on CPAN, Clone, which provides increased flexibility (it can handle datastructures of arbitrary shape) with its clone() function. However, it is actually considerably slower than even your first method. It's an example of how the increased abstraction that makes it a more useful tool all around also carries a performance cost.
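      For reference, a minimal Clone usage sketch (assuming the Clone module is installed from CPAN; the data here is illustrative):

```perl
use strict;
use warnings;
use Clone qw/clone/;

# clone() deep-copies structures of arbitrary shape, not just 2D arrays.
my $data = { Matrix => [ [ 1, 2 ], [ 3, 4 ] ], label => 'demo' };
my $copy = clone( $data );

$copy->{Matrix}[0][0] = 'Problem!';
print "deep copy\n" if $data->{Matrix}[0][0] == 1;
```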

      diotalevi has (or had perhaps) a module on CPAN, Clone::Fast, but when I went to install it with cpan Clone::Fast, the cpan utility couldn't find it. I would like to have tried benchmarking it.

      Another aside: Storable (which is in the core distribution) offers dclone(), but according to the Clone documentation it's even slower, while being even more flexible.
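      A minimal dclone() sketch for comparison (my example; Storable deep-copies by serializing the structure and reading it back, which helps explain the extra cost):

```perl
use strict;
use warnings;
use Storable qw/dclone/;

my $matrix = [ [ 1, 2 ], [ 3, 4 ] ];
my $copy   = dclone( $matrix );   # deep copy via serialize/deserialize

$copy->[1][1] = 'Problem!';
print "original intact\n" if $matrix->[1][1] == 4;
```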


      Dave

        I will be sure to give those modules a look over. I don't think they will help me much now (as I am only cloning the matrices), but they may in the future.