in reply to How to save memory, parsing a big file.

I don't know if this is still the case, but it used to be that when Perl grew a data structure for you, it would double the memory allocated to it each time, even if you really only needed one more element. You could try giving your hash(es) a sensible number of buckets up front by assigning to keys(), thusly:

keys %total = 500;
where 500 is the number of buckets you expect your hash to need (you'll have to determine this empirically; Perl rounds the request up to the next power of two, so this actually reserves 512 buckets).
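
For what it's worth, here's a minimal sketch of the idea. The tab-separated input and the %total tally are made-up stand-ins for whatever your real parsing loop does:

    use strict;
    use warnings;

    my %total;

    # Pre-allocate the buckets before filling the hash. Per
    # perldoc -f keys, Perl rounds the request up to the next
    # power of two, so this actually gets 512 buckets.
    keys(%total) = 500;

    # Populate as usual; the hash no longer has to grow (and
    # double its allocation) bucket by bucket along the way.
    while (my $line = <STDIN>) {
        chomp $line;
        my ($key, $count) = split /\t/, $line;
        $total{$key} += $count;
    }

    print "$_: $total{$_}\n" for sort keys %total;

The same trick works for arrays, by the way: assigning to $#array (e.g. $#lines = 9999;) pre-extends the array so it doesn't have to grow incrementally either.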