in reply to Re^2: Very Large Arrays
in thread Very Large Arrays

I think the biggest boon for my buck is going to simply be doubling the memory on this machine

If the machine has the hardware capacity, that's definitely the easiest option.

That said, you could probably make a few small changes to the script that would substantially reduce the memory requirement without having to rewrite it in C.

For example, changing the small snippet you showed to the following will substantially reduce the memory requirement (at that point):

    open(TEMP1, $overlap_files[$i]) || die "\n\nCan't open $overlap_files[$i]!!\n\n";
    my $n = 0;
    my @file1;
    $file1[ $n++ ] = $_ while <TEMP1>;   # store one line at a time, avoiding the temporary list a list-context slurp builds
    close TEMP1;

    open(TEMP2, $overlap_files[$j]) || die "\n\nCan't open $overlap_files[$j]!!\n\n";
    $n = 0;
    my @file2;
    $file2[ $n++ ] = $_ while <TEMP2>;
    close TEMP2;

And building a big string instead of an array for the shuffle would reduce it further.
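For example, something along these lines (an untested sketch, not your code; it assumes every record is a fixed number of bytes, $RECLEN, which you would have to adapt to your data):

    use strict;
    use warnings;

    # Sketch only: Fisher-Yates shuffle over fixed-width records packed
    # into a single scalar, rather than an array of lines.
    my $RECLEN = 32;     # assumed record length in bytes
    my $data   = '';     # all records appended into one big string
    # ... fill $data by appending one $RECLEN-byte record at a time ...

    my $count = length( $data ) / $RECLEN;

    for( my $i = $count - 1; $i > 0; --$i ) {
        my $j = int rand( $i + 1 );
        next if $i == $j;
        # swap records $i and $j in place using 4-arg substr
        my $tmp = substr( $data, $i * $RECLEN, $RECLEN );
        substr( $data, $i * $RECLEN, $RECLEN, substr( $data, $j * $RECLEN, $RECLEN ) );
        substr( $data, $j * $RECLEN, $RECLEN, $tmp );
    }

That keeps the per-element overhead of an array of scalars out of the picture entirely; only the raw bytes are stored.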

Often, just a little inspection can show where large amounts of data can be incrementally thrown away once you are finished with them, and that can add up to huge savings.

For example, by the time you are ready to do the shuffle, have you finished with the input arrays?
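If so, releasing them explicitly at that point hands their memory back to perl for reuse just when you need the headroom most; e.g. (assuming the @file1 and @file2 from the snippet above are no longer needed):

    # Assumes @file1 and @file2 are finished with once the pairs list is built
    undef @file1;
    undef @file2;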

And, does the algorithm for building the list of interesting pairs require that both input arrays be loaded into memory in their entirety, or could you process them one line at a time? Perhaps reading them in lockstep, as sketched below.
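If the lockstep approach is possible, a sketch might look like this (untested; it reuses the $overlap_files/$i/$j from your snippet, and process_pair() is just a stand-in for whatever the real pairing logic is):

    # Sketch only: read the two overlap files one line at a time, in
    # lockstep, so neither file is ever held in memory in its entirety.
    open( my $fh1, '<', $overlap_files[$i] ) or die "Can't open $overlap_files[$i]: $!";
    open( my $fh2, '<', $overlap_files[$j] ) or die "Can't open $overlap_files[$j]: $!";

    while( 1 ) {
        my $line1 = <$fh1>;
        my $line2 = <$fh2>;
        last unless defined $line1 and defined $line2;
        chomp( $line1, $line2 );
        process_pair( $line1, $line2 );   # only the current pair is in memory
    }

    close $fh1;
    close $fh2;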

Anyway, good luck.


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

The start of some sanity?

Re^4: Very Large Arrays
by Desade (Initiate) on Mar 16, 2012 at 16:12 UTC

    As a follow-up, I mostly left the script alone, since the wife is in the final weeks of a really big analytic paper and I can't risk slowing her down with my debugging. But I did double the memory on the machine (to 12GB), and saw the clock-time cost of the Fisher-Yates shuffle drop from the ~hour mentioned above to about 45 seconds to do all 43 million iterations.

    So I think we can safely say it was a swapping issue...

    Thanks for all the kind suggestions and analysis!