in reply to Re^2: Performance problem with Clone Method
in thread Performance problem with Clone Method

Are these matrices each the same size? Could you recycle the underlying arrays from abandoned branches?

Or, if you have some idea of your storage requirements per branch, maybe you could preallocate a chunk large enough to hold several (or more) of your matrices and keep track of offsets to the free ones. Your indexing then becomes M(i,j) => $all_arrays[$offset + i*ncols + j], where you find $offset at clone time by consulting some kind of free list or bitmap. The underlying array's size would be n * nrows * ncols, where n is your chunk size, and $offset would take values in [0, nrows*ncols, 2*nrows*ncols, ..., (n-1)*nrows*ncols].

I guess this is kind of writing your own memory manager, which may be too much trouble, but perhaps it would pay off. I'm assuming here that the worst part of your algorithm's performance (assuming it's already the best algorithm, as an algorithm, that you know) comes from memory allocation, and that asking Perl to allocate one large array once will be faster than asking it many times to allocate small arrays, but my intuition may be skewed from doing non-Perl work.
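A minimal sketch of that idea, with made-up dimensions and helper names (clone_matrix, release_matrix, and the sizes are illustrative, not from your code):

    use strict;
    use warnings;

    # One flat array holds $chunk_size matrices of $nrows x $ncols each;
    # a free list hands out slot offsets instead of allocating new arrays.
    my ($nrows, $ncols, $chunk_size) = (100, 100, 32);   # example sizes only
    my $slot_len = $nrows * $ncols;

    my @pool = (0) x ($chunk_size * $slot_len);          # one big allocation up front
    my @free = map { $_ * $slot_len } 0 .. $chunk_size - 1;   # available slot offsets

    # "Clone" a matrix: grab a free slot and slice-copy the source slot into it.
    sub clone_matrix {
        my ($src) = @_;
        my $dst = shift(@free) // die "pool exhausted\n";
        @pool[ $dst .. $dst + $slot_len - 1 ]
            = @pool[ $src .. $src + $slot_len - 1 ];
        return $dst;
    }

    # Abandoning a branch just returns its slot to the free list.
    sub release_matrix {
        my ($offset) = @_;
        push @free, $offset;
    }

    # Element access: M(i, j) within the slot starting at $offset.
    sub get { my ($offset, $i, $j) = @_; return $pool[ $offset + $i * $ncols + $j ]; }
    sub set { my ($offset, $i, $j, $v) = @_; $pool[ $offset + $i * $ncols + $j ] = $v; }

Whether the slice-copy "clone" actually beats Clone's deep copy for your data would need benchmarking, of course.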

Re^4: Performance problem with Clone Method
by Commandosupremo (Novice) on Jul 27, 2011 at 00:48 UTC

    In response to your first question: yes, all the matrices are the same size. In response to the second: maybe, but I do not know how I could easily do that.

    However, I can easily determine the maximum number of branches, and since all the matrices are the same size, I believe it would be possible to pre-allocate them as you suggested.