"Someone has suggested packing the data"
You forgot who?
This doesn't attempt to perform your required processing; it just demonstrates that it is possible to hold two 8400x17120-element datasets in memory concurrently, provided you use the right storage formats for them.
From what you said, @aod only ever holds a single char per element, so instead of using a whole 64-byte scalar for each element, use one string of chars for each row at the second level of @aod, and use substr to access the individual elements.
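A minimal sketch of that access pattern, using hypothetical small dimensions (3 x 10 rather than your 8400 x 17120) just to show the substr get/set idiom:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # One string per row; each char is one "element".
    my @aod = map { 'd' x 10 } 1 .. 3;

    # Read element ($i, $j): substr( $aod[$i], $j, 1 )
    my $val = substr( $aod[1], 4, 1 );      # 'd'

    # Write element ($i, $j): substr works as an lvalue.
    substr( $aod[1], 4, 1 ) = 'X';

    print substr( $aod[1], 4, 1 ), "\n";    # prints 'X'

The lvalue form of substr (or equivalently 4-arg substr) modifies the row string in place, so writes cost no extra allocation.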
For @aob, you only need integers, so use Tie::Array::Packed for that. It uses just 4 bytes per element instead of 24, but because it is tied, you use it just as you would a normal array.
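To see why that saves memory, here is a core-Perl illustration of the packing that Tie::Array::Packed does under the hood (this is not the module itself, just the idea): a whole row of integers lives in one string at 4 bytes apiece, and any element can be pulled back out with substr plus unpack.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # A row of 17120 integers in the same range as the real data.
    my @ints = map { 1e5 + int( rand 9e5 ) } 1 .. 17120;

    # Pack them into one string: 'l' = signed 32-bit integer.
    my $packed = pack 'l*', @ints;
    print length( $packed ), "\n";          # 68480 bytes = 17120 * 4

    # Read element $j back without unpacking the whole row.
    my $j   = 10_000;
    my $val = unpack 'l', substr( $packed, $j * 4, 4 );
    print $val == $ints[$j] ? "ok\n" : "mismatch\n";

The tied module wraps exactly this kind of fixed-width storage behind ordinary array syntax, so you get the 4-bytes-per-element footprint without writing the pack/unpack bookkeeping yourself.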
Putting those two together, you can have both arrays fully populated in memory using around 1.2 GB, instead of the roughly 9 GB that standard arrays would require:
#! perl -slw
use strict;
use Tie::Array::Packed;
#use Math::Random::MT qw[ rand ];
$|++;

## One 17120-char string per row instead of 17120 scalars.
my @aod = map { 'd' x 17120 } 1 .. 8400;

## To access individual elements of @aod,
## instead of $aod[ $i ][ $j ] use:
##     substr( $aod[ $i ], $j, 1 );

my @aob;
for ( 1 .. 8400 ) {
    printf "\r$_";
    tie my @row, 'Tie::Array::Packed::Integer';
    @row = map { 1e5 + int( rand 9e5 ) } 1 .. 17120;
    push @aob, \@row;
}

## For @aob use the normal syntax $aob[ $i ][ $j ],
## but remember that you can only store integers.

<>;
print $aob[ $_ ][ 10000 ] for 0 .. $#aob;
In reply to Re^2: Handling HUGE amounts of data
by BrowserUk
in thread Handling HUGE amounts of data
by Dandello