> I do some work (generate more data)
It really depends on what kind of work you are doing, and you keep us guessing.
If you are just processing the input files in a linear way (from one end to the other), you'd better consider a sliding window, or more generally iterators that read chunks. (see Re: Memory Leak when slurping files in a loop (sliding window explained) )
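Roughly like this (untested sketch; the filename, chunk size and tail length are made up for illustration):

```perl
use strict;
use warnings;

my $file       = 'input.dat';        # hypothetical input file
my $chunk_size = 64 * 1024;          # read 64 KB at a time

open my $fh, '<:raw', $file or die "Can't open $file: $!";

my $window = '';                     # sliding window buffer
while ( read( $fh, my $chunk, $chunk_size ) ) {
    $window .= $chunk;

    # ... scan/process $window here ...

    # keep only the tail that might still matter (e.g. a record
    # spanning the chunk boundary), so memory use stays bounded
    $window = substr( $window, -1024 );
}
close $fh;
```

This way memory stays constant no matter how big the input grows.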
If you need to access the data randomly, you "might" be better off building a lookup index beforehand, so you can identify and load only the relevant parts into memory. (see also this approach to split up hashes and rely on swapping, but this will still push against the memory limits)
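Something along these lines (untested sketch; assumes one record per line, keyed by the first whitespace-separated field - adapt the key extraction to your format):

```perl
use strict;
use warnings;

my $file = 'records.txt';            # hypothetical data file

open my $fh, '<', $file or die "Can't open $file: $!";

# one linear pass: remember the byte offset of each record
my %offset;
my $pos = tell $fh;
while ( my $line = <$fh> ) {
    my ($key) = split ' ', $line;
    $offset{$key} = $pos;
    $pos = tell $fh;
}

# later, random access: jump straight to one record
my $wanted = 'some_key';             # made-up key for illustration
if ( defined( my $off = $offset{$wanted} ) ) {
    seek $fh, $off, 0;               # 0 = SEEK_SET
    my $record = <$fh>;
    print $record;
}
close $fh;
```

Only the index lives in RAM; the payload stays on disk until you actually need it.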
Of course there are dedicated solutions for the latter: database servers.
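For instance with DBI and DBD::SQLite you get indexed on-disk lookups without running a separate server (sketch only; table and column names are made up):

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( 'dbi:SQLite:dbname=work.db', '', '',
                        { RaiseError => 1, AutoCommit => 1 } );

$dbh->do('CREATE TABLE IF NOT EXISTS items (k TEXT PRIMARY KEY, v TEXT)');

my $ins = $dbh->prepare('INSERT OR REPLACE INTO items (k, v) VALUES (?, ?)');
$ins->execute( 'foo', 'some payload' );

# only the rows you ask for are pulled into memory
my ($v) = $dbh->selectrow_array( 'SELECT v FROM items WHERE k = ?',
                                 undef, 'foo' );
print "$v\n";
$dbh->disconnect;
```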
I agree with eyepops that memory is not that much of an issue nowadays, but if you plan on scaling things up in the future, a memory-frugal design would be reasonable.
Cheers Rolf
(addicted to the Perl Programming Language :)