You're unnecessarily using a second map level. Since your matrix is two-dimensional (there are no deeper layers), all you need to do is copy each second-level row, which can be done with [ @$_ ] inside the outer map.
In testing the code below I used Data::Dumper to verify that both copies of @matrix were identical in structure. Then I made a copy, changed one of its elements to "Problem!", and re-printed the original @matrix to ensure that "Problem!" hadn't propagated back to the original (which would have indicated that I got an alias rather than a copy).
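Here's a minimal sketch of that verification, for reference. The small sample matrix and variable names are mine, not from the benchmark code below:

use strict;
use warnings;
use Data::Dumper;

my @matrix = ( [ 1, 2 ], [ 3, 4 ] );

# Copy with the single-map technique.
my $copy = [ map { [ @$_ ] } @matrix ];

# Both structures should dump identically.
print Dumper( \@matrix ), Dumper( $copy );

# Mutate the copy; the original must be unaffected,
# or we made an alias rather than a copy.
$copy->[0][0] = 'Problem!';
print Dumper( \@matrix );    # should still show 1, not 'Problem!'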
Once I was sure that I had duplicated your original functionality, I went ahead with benchmarks. As you'll see, the new "onemap" method is significantly faster.
use strict;
use warnings;
use Benchmark qw/cmpthese/;

my @matrix = (
    [  1,  2,  3,  4,  5, ],
    [  6,  7,  8,  9, 10, ],
    [ 11, 12, 13, 14, 15, ],
    [ 16, 17, 18, 19, 20, ],
    [ 21, 22, 23, 24, 25, ],
);

cmpthese( 50000, {
    onemap  => sub { my $result = onemap( \@matrix ); },
    twomaps => sub { my $result = twomaps( \@matrix ); },
} );

sub onemap {
    my $matrix = shift;
    return { Matrix => [ map { [ @$_ ] } @$matrix ] };
}

sub twomaps {
    my $matrix = shift;
    return {
        Matrix => [
            map {
                my $row = $_;
                [ map { ( $_ ) } @$row ];
            } @$matrix
        ],
    };
}

__END__
            Rate twomaps  onemap
twomaps  62814/s      --    -39%
onemap  103306/s     64%      --
You would need to plug the algorithm back into your method call function.
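For example, assuming your accessor looks something like this (the method name, class layout, and attribute key here are hypothetical; adapt them to your actual code):

# Hypothetical accessor: returns a deep copy of the object's matrix
# so callers can't modify internal state through the return value.
sub get_matrix {
    my $self = shift;
    return { Matrix => [ map { [ @$_ ] } @{ $self->{Matrix} } ] };
}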
Update with additional benchmarking:
I added the Clone module to my previous example and benchmarked all over again. I also generated a matrix of 100x100 random integers to more closely approximate the size of your data structure. As I suspected, with a larger data structure the time spent in subroutine call overhead faded into the background, better showcasing the performance differences between the algorithms themselves. Here's the code and the results:
use strict;
use warnings;
use Benchmark qw/cmpthese/;
use Data::Dumper;
use Clone qw/clone/;

my @matrix = map { [ map { int( rand( 100 ) ) } 0 .. 99 ] } 0 .. 99;

cmpthese( 1000, {
    onemap  => sub { my $result = onemap( \@matrix ); },
    twomaps => sub { my $result = twomaps( \@matrix ); },
    clone   => sub { my $result = clone( \@matrix ); },
} );

sub onemap {
    my $matrix = shift;
    return { Matrix => [ map { [ @$_ ] } @$matrix ] };
}

sub twomaps {
    my $matrix = shift;
    return {
        Matrix => [
            map {
                my $row = $_;
                [ map { ( $_ ) } @$row ];
            } @$matrix
        ],
    };
}

__END__
          Rate  clone twomaps  onemap
clone    107/s     --    -78%    -93%
twomaps  475/s   344%      --    -70%
onemap  1562/s  1362%    229%      --
As you can see, the "onemap" technique is not just a little faster; it's a lot faster. If 80% of your time is spent in these functions, your runtime should be more than cut in half. It also becomes obvious that when performance matters more than flexibility, the difference between the hand-rolled copy and Clone::clone() is night and day.
Dave