You talk about loading the whole file into an array even though you process it line by line. This is obviously a waste of memory. Read the file a line at a time.
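A minimal sketch of the line-at-a-time approach (the filename and the processing step are just placeholders):

    use strict;
    use warnings;

    # Read the file one line at a time; only the current line is
    # ever held in memory, instead of the whole file in an array.
    open my $fh, '<', 'big_input.txt' or die "Can't open big_input.txt: $!";
    while (my $line = <$fh>) {
        chomp $line;
        # ... process $line here ...
    }
    close $fh or die "Can't close big_input.txt: $!";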
So that leaves the size of the hash. You could start with a more memory-efficient data structure, such as Judy::HS. If that isn't enough, you could move the hash out of memory entirely with a disk-based solution such as DB_File.
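For the disk-based route, here's a rough sketch using DB_File; the file name and the counting logic are assumptions for illustration, not your actual code:

    use strict;
    use warnings;
    use Fcntl;
    use DB_File;

    # Tie %seen to a Berkeley DB file on disk so the hash no longer
    # has to fit in RAM. 'seen.db' is a made-up filename.
    tie my %seen, 'DB_File', 'seen.db', O_RDWR|O_CREAT, 0644, $DB_HASH
        or die "Can't tie seen.db: $!";

    while (my $line = <STDIN>) {
        chomp $line;
        $seen{$line}++;    # counts accumulate on disk, not in memory
    }

    untie %seen;

Lookups through a tied DB_File hash are much slower than an in-memory hash, so expect to trade speed for memory.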