Yes, there is a great deal going on prior to this shuffle routine. The 43-million-element array of number pairs being shuffled is generated by calculations done earlier in the script, and those calculations take as input two files of about 2.6GB each. Since this is not my code, it causes me no shame to say that those files are read in thus:
    open(TEMP1, $overlap_files[$i]) || die "\n\nCan't open $overlap_files[$i]!!\n\n";
    open(TEMP2, $overlap_files[$j]) || die "\n\nCan't open $overlap_files[$j]!!\n\n";
    my @file1 = <TEMP1>;
    my @file2 = <TEMP2>;
    close TEMP1;
    close TEMP2;
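For comparison, here is a minimal sketch (not the original script) of reading one of those files line by line with a lexical filehandle, so the whole 2.6GB never has to sit in memory at once. The @overlap_files array and index are reused from the snippet above and are declared here with placeholder values only so the sketch compiles on its own:

    use strict;
    use warnings;

    # Placeholders standing in for values set earlier in the real script.
    my @overlap_files = ( 'overlap_a.txt', 'overlap_b.txt' );
    my $i = 0;

    open my $fh, '<', $overlap_files[$i]
        or die "\n\nCan't open $overlap_files[$i]: $!\n\n";
    while ( my $line = <$fh> ) {
        chomp $line;
        # split/parse $line here; only the current line and whatever
        # gets pushed onto the result array stays resident
    }
    close $fh;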
While it's doing that, you can watch the memory usage grow like an escalator to nowhere. Those arrays of strings then get iterated, split, parsed, calculated on, and eventually pairs of "interesting" values from each of them get pushed, one by one, onto the 43-mil array that is the subject of the shuffle. So by the time the shuffle gets called, we're quite a ways into the swap.
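On the per-pair storage itself: this is only a sketch under assumptions (tab-separated lines, the "interesting" values in hypothetical columns 2 and 5, a made-up input filename), but packing each pair into a single scalar instead of pushing a two-element array reference spares Perl one anonymous array and two extra scalars per pair, which adds up across 43 million entries:

    use strict;
    use warnings;

    my @pairs;
    open my $fh, '<', 'overlap_a.txt' or die "Can't open: $!";
    while ( my $line = <$fh> ) {
        chomp $line;
        my @f = split /\t/, $line;               # assumed tab-separated
        push @pairs, pack( 'd2', @f[ 2, 5 ] );   # hypothetical columns; one flat scalar per pair
    }
    close $fh;

    # Later, to recover a pair:
    # my ( $x, $y ) = unpack 'd2', $pairs[$k];

Each packed element carries 16 bytes of payload plus one scalar's overhead, where a [ $x, $y ] element costs a reference scalar, an anonymous array, and two more scalars on top of that.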
Thank you all for the analysis and ideas. You pretty much confirmed what I thought I was seeing, and despite my personal desire to re-write this thing in C, I think the biggest bang for my buck is going to be simply doubling the memory on this machine before starting on the analysis runs she has planned for me. It sounds like having this whole array in physical memory prior to calling the shuffle is likely to drastically reduce the run-time... more than anything else I can squeeze out with software optimization alone.
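One last software-side thing worth checking, since the shuffle routine itself isn't shown here: if it builds a second list (List::Util's shuffle, for example, returns a shuffled copy), peak memory briefly doubles at the worst possible moment. A hedged sketch of an in-place Fisher-Yates shuffle, assuming the pairs live in a flat array like the @pairs above, keeps peak memory where it already was:

    use strict;
    use warnings;

    # Swap elements in place; no second 43-million-element list is built.
    sub shuffle_in_place {
        my ($aref) = @_;
        for ( my $n = $#{$aref}; $n > 0; $n-- ) {
            my $r = int rand( $n + 1 );
            @{$aref}[ $n, $r ] = @{$aref}[ $r, $n ];
        }
    }

    my @pairs = ( 1 .. 10 );       # stand-in data just for the sketch
    shuffle_in_place( \@pairs );

That said, any shuffle touches the array in random order, so if it doesn't fit in RAM it will thrash the swap no matter how it's written; the memory upgrade is still the bigger win.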