in reply to Re^4: die rather than exit on out-of-memory failure?
in thread die rather than exit on out-of-memory failure?

I considered the approach of a trial malloc-then-free before any perl allocation for a given data structure. That approach would work if perl were built to use the system malloc.

Perl's malloc seems to be optimized for speed rather than memory usage: for a very large allocation (say 200MiB), the perl process would often grow by something like 400MiB. The result is that with perl's malloc, a trial check needs to request roughly 2X the size you actually need in order to avoid perl death by malloc.

Using the system malloc should work better, but it is still not foolproof, and if it fails---boom goes perl.


Re^6: die rather than exit on out-of-memory failure?
by BrowserUk (Patriarch) on Jan 06, 2011 at 05:38 UTC
    That approach would work if perl was built using the system malloc.

    Even when perl is built to use its own malloc, all the memory is actually acquired from the OS via calls to the CRT malloc, or, in the case of Win32 and perhaps other platforms, directly from the OS allocator for very large allocations.

    for a very large malloc size (say 200MiB) the perl process would often grow by something like 400MiB.

    That generally comes about because for statements like:

    my $x = 'x' x 200e6;

    First, 200MB is allocated to construct the right-hand side, and then a further 200MB is allocated for $x and the data is copied over. On Win32, the first 200MB is then immediately released back to the OS, but it does mean that you need to be able to allocate double the actual requirement. Sad but true.
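    The build-then-copy pattern behind that doubling can be illustrated outside perl with a small C sketch (the function names here are invented for the example, and the buffer sizes are tiny so it runs anywhere):

```c
/* Sketch of why assigning a freshly built value peaks at roughly
 * double the final size: the value is constructed in a temporary
 * first, then copied into the target, so both buffers are live at
 * once.  Building directly in the target avoids the second buffer. */
#include <stdlib.h>
#include <string.h>

/* Build-then-copy: peak usage is 2*n bytes while memcpy runs. */
char *copy_assign(size_t n)
{
    char *tmp = malloc(n);            /* right-hand side: n bytes */
    if (!tmp) return NULL;
    memset(tmp, 'x', n);
    char *dst = malloc(n);            /* target: another n bytes  */
    if (!dst) { free(tmp); return NULL; }
    memcpy(dst, tmp, n);              /* peak: 2*n bytes live     */
    free(tmp);                        /* temporary released after */
    return dst;
}

/* Build in place: peak usage is only n bytes. */
char *in_place(size_t n)
{
    char *dst = malloc(n);
    if (!dst) return NULL;
    memset(dst, 'x', n);
    return dst;
}
```

    In Perl, an in-place form such as `$x = 'x'; $x x= 200e6;` is sometimes suggested as a way to avoid the separate right-hand-side buffer, though whether it actually helps depends on the perl version and build.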


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.