in reply to Re^2: Very Large Arrays
in thread Very Large Arrays
"I think the biggest boon for my buck is going to simply be doubling the memory on this machine."
If the machine has the hardware capacity, that's definitely the easiest option.
That said, you could probably make a few small changes to the script that would substantially reduce the memory requirement without having to rewrite it in C.
For example, changing the small snippet you showed to the following will substantially reduce the memory requirement (at that point):
# Read each file one line at a time instead of slurping it into a list first.
my $n = 0;
open(TEMP1, $overlap_files[$i]) || die "\n\nCan't open $overlap_files[$i]!!\n\n";
my @file1;
$file1[ $n++ ] = $_ while <TEMP1>;
close TEMP1;

open(TEMP2, $overlap_files[$j]) || die "\n\nCan't open $overlap_files[$j]!!\n\n";
$n = 0;
my @file2;
$file2[ $n++ ] = $_ while <TEMP2>;
close TEMP2;
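(The win is that the obvious one-liner, my @file1 = <TEMP1>;, reads the entire file into a temporary list and then copies that into the array, so it transiently holds two copies of the data; the line-at-a-time loop above never builds the temporary list.)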
And building a big string instead of an array for the shuffle would reduce it further.
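To illustrate the big-string idea (a sketch of the general technique, not your code; $file and the fixed width of 80 are assumptions): if each record can be padded to a fixed width, the whole set can live in one scalar and be Fisher-Yates shuffled in place with substr(), so no per-line scalars exist at all:

my $width = 80;                      # assumed fixed record width
my $buf   = '';
open( RECS, $file ) || die "Can't open $file: $!";   # $file: your input path
while ( <RECS> ) {
    chomp;
    $buf .= pack "A$width", $_;      # pad (or truncate) each line to $width bytes
}
close RECS;

my $n = length( $buf ) / $width;
for my $p ( reverse 1 .. $n - 1 ) {  # Fisher-Yates over the records
    my $q = int rand( $p + 1 );
    next if $p == $q;
    ( substr( $buf, $p * $width, $width ), substr( $buf, $q * $width, $width ) )
        = ( substr( $buf, $q * $width, $width ), substr( $buf, $p * $width, $width ) );
}
# Record $k is now substr( $buf, $k * $width, $width ).

Each record then costs exactly $width bytes, instead of a full Perl scalar that carries several dozen bytes of bookkeeping on top of the string itself.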
Often, just a little inspection will show where large amounts of data can be thrown away incrementally as you finish with them, and those savings can add up to something huge.
For example, by the time you are ready to do the shuffle, have you finished with the input arrays?
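If so, releasing them explicitly before the shuffle lets perl reuse that memory (using the array names from the snippet above):

undef @file1;   # done with the raw input lines
undef @file2;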
And does the algorithm for building the list of interesting pairs require that both input arrays be loaded into memory in their entirety, or could you process them one line at a time, perhaps reading them in lockstep?
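A lockstep read might look like this sketch, where process_pair() is a hypothetical stand-in for whatever you do with each pair:

open( TEMP1, $overlap_files[$i] ) || die "Can't open $overlap_files[$i]: $!";
open( TEMP2, $overlap_files[$j] ) || die "Can't open $overlap_files[$j]: $!";
until ( eof( TEMP1 ) || eof( TEMP2 ) ) {
    my $line1 = <TEMP1>;
    my $line2 = <TEMP2>;
    process_pair( $line1, $line2 );   # hypothetical: handle one pair
}
close TEMP1;
close TEMP2;

Done that way, neither file ever needs to be held in memory at all.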
Anyway, good luck.