I guess that creating 500e6 sub-arrays is the most memory-consuming part.
I can't tell how efficient the use of @d is, because that depends on how sparsely the entries are populated.
32 bits means 4.2e9 potential entries, but with as few as 500e6/256 = 1,953,125 distinct values in the worst case (every value occurring the full 256 times), you'd end up with roughly 2200 empty entries per used entry in @d. In that case I bet a hash is more memory efficient.
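To make that concrete, here is a minimal, scaled-down sketch (my code, not the OP's) that measures both structures with the CPAN module Devel::Size. It uses a 2**20 key space instead of 2**32 so the sparse array stays small enough to allocate; the key range and insert count are just assumptions for the demo:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Devel::Size qw(total_size);    # CPAN module: deep size of a structure

    # Insert 1000 random keys into both a hash and an array.
    my ( %h, @a );
    for ( 1 .. 1000 ) {
        my $k = int rand 2**20;        # scaled-down stand-in for 32-bit values
        $h{$k}++;                      # hash allocates only buckets actually used
        $a[$k]++;                      # array allocates slots up to the highest index
    }

    printf "hash:  %d bytes for %d keys\n",  total_size( \%h ), scalar keys %h;
    printf "array: %d bytes for %d slots\n", total_size( \@a ), scalar @a;

The array's footprint is dominated by the mostly empty slots below the highest index used, while the hash grows only with the keys actually seen; scaled up, that's exactly what the 2200:1 ratio above means for @d.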
But it really depends on what you are trying to achieve with all that data.
> which don't occur more than 256 times each.
your "test" doesn't reflect that.
Cheers Rolf
(addicted to the Perl Programming Language :)