Re: Memory efficient way to deal with really large arrays?
by LanX (Saint) on Dec 13, 2020 at 00:17 UTC ( [id://11125082] )
> push @p, [ $p1, $p2 ];

I guess that creating 500e6 sub-arrays is the most memory-consuming part.

I can't tell how efficient the use of @d is, because that depends on how sparsely its entries are populated. 32 bits means 4.2e9 potential entries, but with at most 500e6/256 = 1,953,125 of them actually used, you'd end up with roughly 2200 empty slots for every used entry in @d (4.2e9 / 1.95e6 ≈ 2200). I bet a hash is more memory efficient than that.

But it really depends on what you are trying to achieve with all that data.
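A minimal sketch of both ideas, assuming the pair values fit into unsigned 32 bits (the demo loop, the stand-in values and the example key are mine, not from your code):

    use strict;
    use warnings;

    # Instead of 500e6 anonymous sub-arrays (each [...] reference
    # carries noticeable per-structure overhead), pack each pair
    # into one fixed-width string: 8 bytes of payload per pair.
    my @p;
    for my $i ( 0 .. 9 ) {                    # tiny demo loop
        my ( $p1, $p2 ) = ( $i, 2 * $i );     # stand-in pair values
        push @p, pack 'NN', $p1, $p2;
    }
    my ( $q1, $q2 ) = unpack 'NN', $p[3];     # read one pair back
    print "pair 3: $q1 $q2\n";

    # For the sparse counts: a hash only allocates the ~2e6 keys
    # actually seen, instead of spanning the whole 4.2e9 range.
    my %d;
    $d{ 123_456_789 }++;                      # example 32-bit key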
Update:

> which don't occur more than 256 times each

Your "test" doesn't reflect that.
Cheers Rolf