in reply to Memory usage & hashes of lists of hashes of lists

You are getting about the efficiency that I would expect for a HoLoHoL. As a rule of thumb, an array is about 66% memory efficient. The inefficiency comes from the doubling algorithm used to allocate array memory: when an array outgrows its allocation, Perl roughly doubles it, so on average a chunk of the allocated space sits unused.

I don't know for certain, but I would guess that a hash has about the same efficiency.

The problem is that a deep data structure multiplies these inefficiencies together, one factor of 0.66 per level of the HoLoHoL. So you have

    9000 records * 5k / (.66 * .66 * .66 * .66) ≈ 237.2 MB
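If you want to check the numbers against your own structure, here is a minimal sketch, assuming the Devel::Size module from CPAN is available; the nested layout below is hypothetical, just a stand-in for your HoLoHoL:

    use strict;
    use warnings;
    use Devel::Size qw(total_size);

    # Build a stand-in HoLoHoL: hash of lists of hashes of lists.
    my %holohol;
    for my $rem ( 1 .. 100 ) {
        for my $i ( 0 .. 4 ) {
            # Each leaf list holds a few 50-byte strings.
            push @{ $holohol{$rem}[$i]{tickets} }, 'x' x 50 for 1 .. 5;
        }
    }

    # total_size walks the structure and reports actual bytes,
    # including the container overhead at every level.
    printf "HoLoHoL uses %d bytes\n", total_size( \%holohol );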

Technical solution 1
One solution is to use a flatter data structure. If you use a single-level hash with a key like Rem|Schema|Ticket, it will use much less memory, since you pay the container overhead only once. Of course, it will also require more code and more CPU.
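As a rough sketch of what that looks like (the Rem, Schema, and Ticket names are taken from your post; the record payloads here are invented):

    use strict;
    use warnings;

    my %flat;

    # One entry per record; the three-part key replaces three
    # levels of nesting.
    sub store_record {
        my ( $rem, $schema, $ticket, $data ) = @_;
        $flat{ join '|', $rem, $schema, $ticket } = $data;
    }

    sub fetch_record {
        my ( $rem, $schema, $ticket ) = @_;
        return $flat{ join '|', $rem, $schema, $ticket };
    }

    store_record( 'rem1', 'schemaA', 42, 'some payload' );
    print fetch_record( 'rem1', 'schemaA', 42 ), "\n";

The extra CPU shows up on partial lookups: finding every ticket under one Rem now means scanning all the keys instead of indexing straight into a sub-hash.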

Technical solution 2
Presize the arrays so that they don't allocate so much memory. This is easy for simple arrays, but I have not seen it done for deeper data structures.
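For the simple case, both forms below are standard Perl; presizing the inner pieces of a deep structure would mean repeating this by hand at every level, which is why it rarely gets done:

    use strict;
    use warnings;

    # Presize an array: assigning to $#array allocates room for
    # all the elements at once instead of growing by doubling.
    my @records;
    $#records = 8_999;      # slots 0 .. 8999, i.e. 9000 records

    # Presize a hash: keys(%hash) used as an lvalue is a hint to
    # allocate at least that many buckets up front.
    my %index;
    keys(%index) = 9_000;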

Political solution 1
It sounds to me like your server admins need to find more important work than making you save 200 meg of memory. Perhaps that issue is best saved for when you are asked for comments on their employee evaluations :-). More likely, you have created a really fast application that is embarrassing the server people, and they want you to slow it down so their own solutions don't look so bad in comparison. So they complain about a trivial amount of memory. If you offer to teach the server admins Perl, they may stop complaining. It worked for me once!

It should work perfectly the first time! - toma
