in reply to Growing strings, avoiding copying and fragmentation?

Many memory allocators (and growable-buffer implementations) double the allocation each time it runs out: first 8K, then, if that fills up, 16K, then 32K, and so on. On very large files that means the number of allocations and copies grows only logarithmically with the final size, so the copying cost stays amortized-constant per byte appended, and you never have more than twice as much memory allocated as you actually need. A variation uses Fibonacci numbers, which grow more slowly than doubling but faster than adding a fixed increment each time.
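
For illustration, here's a minimal C sketch of the doubling strategy (the grow_buf type and gb_append function are made-up names for this example, not anything from the original thread); the Fibonacci variant would simply advance newcap along the Fibonacci sequence instead of doubling it:

    /* Minimal sketch of geometric (doubling) growth for an append buffer.
     * The names grow_buf and gb_append are hypothetical. */
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        char   *data;
        size_t  len;   /* bytes in use */
        size_t  cap;   /* bytes allocated */
    } grow_buf;

    /* Append n bytes, doubling the capacity whenever it runs out.
     * Returns 0 on success, -1 if realloc fails. */
    static int gb_append(grow_buf *b, const char *src, size_t n)
    {
        if (b->len + n > b->cap) {
            size_t newcap = b->cap ? b->cap : 8192;   /* start at 8K */
            while (newcap < b->len + n)
                newcap *= 2;                          /* 8K, 16K, 32K, ... */
            char *p = realloc(b->data, newcap);
            if (!p)
                return -1;
            b->data = p;
            b->cap  = newcap;
        }
        memcpy(b->data + b->len, src, n);
        b->len += n;
        return 0;
    }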

Something like that might be worth a try, though you could find that the memory allocator is doing that already for you, and you're best off just trusting it.

As always, try different things and benchmark. One thing that's easy to overlook here is whether your caller is likely to be allocating lots of other memory while your code runs. If only your code is running, just about any allocation strategy will do, because you're the only memory consumer and there will be little fragmentation. With other code allocating at the same time, the choice of strategy matters more. One way to test this is to call your module several times, each invocation working on a different file; that should stress both your code and the allocator fairly well (a sketch of such a test follows below). Also try it alongside memory that Perl itself is allocating, to check that your strategy plays well with Perl's allocator.
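
One hypothetical way to structure such a stress test in C, reusing the grow_buf/gb_append sketch above: append to several buffers round-robin while making unrelated allocations in between, so the growth strategy has to cope with a busy heap rather than having it to itself.

    /* Hypothetical stress test: interleave buffer growth with unrelated
     * allocations to simulate other memory consumers, then time it.
     * Assumes the grow_buf / gb_append sketch above is in scope. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        grow_buf bufs[4] = { { 0 } };    /* pretend each is a different file */
        char chunk[4096] = { 'x' };
        void *noise[1000] = { 0 };
        clock_t start = clock();

        for (int i = 0; i < 100000; i++) {
            /* round-robin so no buffer gets to grow in isolation */
            if (gb_append(&bufs[i % 4], chunk, sizeof chunk) != 0)
                return 1;

            /* unrelated allocations of varying size keep the heap busy,
             * roughly mimicking a caller that also allocates memory */
            free(noise[i % 1000]);
            noise[i % 1000] = malloc((size_t)(rand() % 512 + 1));
        }

        printf("elapsed: %.2fs\n",
               (double)(clock() - start) / CLOCKS_PER_SEC);
        return 0;
    }

To cover the last point, the same harness could be rebuilt against Perl's own allocator, for example by growing an SV with SvGROW (or allocating with Newx/Renew) instead of calling realloc directly, so the test exercises the allocator your module will actually share with Perl.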
