in reply to Re: Very Large Arrays
in thread Very Large Arrays

Yes, there is a great deal going on prior to this shuffle routine. The 43-mil array of number pairs being shuffled is generated from some calculations done earlier in the script, and those calcs take as input two files, each 2.6GB in size. Since this is not my code, it causes me no shame to say that those files are read in thus:

open(TEMP1, $overlap_files[$i]) || die "\n\nCan't open $overlap_files[$i]!!\n\n";
open(TEMP2, $overlap_files[$j]) || die "\n\nCan't open $overlap_files[$j]!!\n\n";
my @file1 = <TEMP1>;
my @file2 = <TEMP2>;
close TEMP1;
close TEMP2;

While it's doing that, you can watch the memory usage grow like an escalator to nowhere. Those arrays of strings then get iterated, split, parsed, calculated on, and eventually pairs of "interesting" values from each of them get pushed, one by one, onto the 43-mil array that is the subject of the shuffle. So by the time the shuffle gets called, we're quite a ways into the swap.
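For illustration only, the general shape of that stage is something like the following. This is not the actual analysis code; the tab-split and the "interesting" test are made-up placeholders, and the demo lines stand in for the real 2.6GB inputs.

#!/usr/bin/perl
use strict;
use warnings;

# Illustrative sketch only -- stands in for the real parsing/calculation.
my @file1 = ( "1\t2\n", "3\t3\n", "5\t8\n" );    # demo lines; really ~2.6GB of input

my @pairs;                                       # the big array being built
for my $line ( @file1 ) {
    chomp $line;
    my ( $x, $y ) = split /\t/, $line;           # placeholder parsing
    push @pairs, [ $x, $y ] if $x != $y;         # placeholder "interesting" test
}
# ...same again for @file2, until @pairs holds ~43 million [ $x, $y ] refs,
# each one an anonymous array with its own per-element overhead.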

Thank you all for all the analysis and ideas. You pretty much confirmed what I thought I was seeing, and despite my personal desire to re-write this thing in C, I think the biggest boon for my buck is going to simply be doubling the memory on this machine before starting on the analysis runs she has planned for me. It sounds like having this whole array in physical memory prior to calling the shuffle is likely to drastically reduce the run-time... more than anything else I can squeeze out with software optimization alone.

Re^3: Very Large Arrays
by BrowserUk (Patriarch) on Feb 16, 2012 at 09:48 UTC
    I think the biggest boon for my buck is going to simply be doubling the memory on this machine

    If the machine has the hardware capacity, that's definitely the easiest option.

    That said, you could probably make a few small changes to the script that would substantially reduce the memory requirement without having to rewrite it in C.

    For example, changing the small snippet you showed to the following will substantially reduce the memory requirement (at that point):

    # Read each file line by line into its array instead of slurping in
    # list context; this avoids the temporary copy of every line that
    # my @file1 = <TEMP1>; builds while doing the assignment.
    my $n = 0;
    open(TEMP1, $overlap_files[$i]) || die "\n\nCan't open $overlap_files[$i]!!\n\n";
    my @file1;
    $file1[ $n++ ] = $_ while <TEMP1>;
    close TEMP1;

    open(TEMP2, $overlap_files[$j]) || die "\n\nCan't open $overlap_files[$j]!!\n\n";
    $n = 0;
    my @file2;
    $file2[ $n++ ] = $_ while <TEMP2>;
    close TEMP2;

    And building a big string instead of an array for the shuffle would reduce it further.
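    For instance, here is a rough sketch (not your code) of that idea: each pair becomes a fixed-width 8-byte record appended to one scalar, and the shuffle swaps records in place. The pack 'NN' template assumes the pairs fit in two unsigned 32-bit ints; adjust the template and record length to suit the real data.

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $RECLEN  = 8;
    my $records = '';

    # Wherever the original pushes a pair onto the big array, append a
    # packed record instead:
    for my $demo ( 1 .. 10 ) {                   # demo data only
        my ( $x, $y ) = ( $demo, $demo * 2 );
        $records .= pack 'NN', $x, $y;
    }

    my $n = length( $records ) / $RECLEN;

    # In-place Fisher-Yates over the fixed-width records.
    for my $i ( reverse 1 .. $n - 1 ) {
        my $j = int rand( $i + 1 );
        next if $i == $j;
        my $tmp = substr $records, $i * $RECLEN, $RECLEN;
        substr( $records, $i * $RECLEN, $RECLEN ) = substr $records, $j * $RECLEN, $RECLEN;
        substr( $records, $j * $RECLEN, $RECLEN ) = $tmp;
    }

    # Unpack a pair only when it is needed:
    for my $k ( 0 .. $n - 1 ) {
        my ( $x, $y ) = unpack 'NN', substr $records, $k * $RECLEN, $RECLEN;
        print "$x $y\n";
    }

    At 8 bytes per pair, 43 million pairs occupy roughly 350MB in one string, versus something on the order of several GB for 43 million anonymous two-element arrays.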

    Often, just a little inspection will show where large amounts of data can be thrown away incrementally as you finish with it, and that can add up to huge savings.

    For example, by the time you are ready to do the shuffle, have you finished with the input arrays?
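    If you have, dropping them before the shuffle frees that memory for reuse within the process:

    undef @file1;   # the raw input lines, no longer needed once the pairs exist
    undef @file2;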

    And does the algorithm for building the list of interesting pairs require that both input arrays be loaded into memory in their entirety, or could you process them one line at a time, perhaps reading them in lockstep?
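    Something along these lines (a sketch only, plugging into the script's existing @overlap_files, $i and $j; the tab-split and per-line work are placeholders) reads the two files in lockstep so neither is ever held in memory whole:

    open my $fh1, '<', $overlap_files[$i] or die "Can't open $overlap_files[$i]: $!";
    open my $fh2, '<', $overlap_files[$j] or die "Can't open $overlap_files[$j]: $!";

    while ( 1 ) {
        my $line1 = <$fh1>;
        my $line2 = <$fh2>;
        last if !defined $line1 or !defined $line2;   # stop at the shorter file
        chomp( $line1, $line2 );
        my @f1 = split /\t/, $line1;                  # placeholder parsing
        my @f2 = split /\t/, $line2;
        # ... per-line calculation here; keep any interesting pair ...
    }

    close $fh1;
    close $fh2;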

    Anyway, good luck.



      As a follow-up, I mostly left the script alone, since the wife is in the final weeks of a really big analytic paper and I can't risk slowing her down with my debugging. But I did double the memory on the machine (to 12GB), and saw the clock-time cost of the Fisher-Yates shuffle drop from the ~hour mentioned above to about 45 seconds to do all 43-million iterations.

      So I think we can safely say it was a swapping issue...

      Thanks for all the kind suggestions and analysis!