in reply to Re^3: die rather than exit on out-of-memory failure?
in thread die rather than exit on out-of-memory failure?

I guess the problem with trying to fix this would be addressing all the places in the rest of the codebase that have long been coded on the assumption that if malloc(), or whichever of the myriad wrappers is used to call it, returns, then the requested memory was available. Though I feel pretty certain I've seen plenty of code that checks the return from Newxx() etc. That said, I wouldn't expect there to be many places where large contiguous chunks of memory are allocated.

As is, the only pragmatic step the PDL authors might take would be to try calling the OS memory allocator directly for large allocations first. If the OS says okay, then give that memory back to the OS and immediately call Perl's malloc() for it. The window of cases where the OS says yes and Perl says no should be pretty small. But that would still require action by the authors of PDL and any other similar modules that routinely allocate and manipulate large contiguous chunks of RAM.

Perhaps the simplest solution would be a new module that a user program can call to check whether the process will be able to satisfy a particular allocation request. Say Devel::MemCheck::memCheck().
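A minimal sketch of what such a probe might look like (Devel::MemCheck and memCheck() are only the suggestion above, not an existing CPAN module): attempt a trial allocation of the requested size and let it be freed immediately. The big caveat, which is the whole point of this thread, is that if the allocator aborts the process rather than raising a trappable die, eval cannot save you; this sketch assumes the failure surfaces as a trappable exception.

```perl
use strict;
use warnings;

# Hypothetical probe: can we (briefly) allocate $bytes of contiguous memory?
# Assumes an allocation failure surfaces as a trappable die, which this
# thread shows is not guaranteed on all builds.
sub mem_check {
    my ($bytes) = @_;
    my $ok = eval {
        my $probe = "\0" x $bytes;    # trial allocation
        1;
    };
    return $ok ? 1 : 0;               # probe string freed on scope exit
}

print mem_check( 10 * 1024 * 1024 ) ? "ok\n" : "not ok\n";   # 10MiB probe
```

A real implementation would likely need to drop to XS and call the system malloc()/free() pair directly, for the reasons discussed below.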


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^5: die rather than exit on out-of-memory failure?
by chm (Novice) on Jan 06, 2011 at 04:27 UTC

    I considered the approach of a trial malloc-then-free before any perl malloc for a given data structure. That approach would work if perl was built using the system malloc.

    Perl's malloc seems to be optimized for speed rather than memory usage: for a very large malloc size (say 200MiB), the perl process would often grow by something like 400MiB. The result is that with perl's malloc, when you check first you need to check for something like 2X the size you actually need to avoid perl death by malloc.

    Using the system malloc should work better, but it is still not foolproof: if it fails, boom goes perl.

      That approach would work if perl was built using the system malloc.

      Even when perl is built to use its own malloc, all the memory is actually acquired from the OS using a call to the CRT malloc, or, in the case of Win32 and perhaps others, directly from the OS allocator for very large allocations.

      for a very large malloc size (say 200MiB) the perl process would often grow by something like 400MiB.

      That generally comes about because for statements like:

      my $x = 'x' x 200e6;

      First, 200MB is allocated to construct the right-hand side, and then a further 200MB is allocated for $x and the data is copied over. On win32, the first 200MB is then immediately released back to the OS, but it does mean that you need to be able to allocate double the actual requirement. Sad but true.
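      Putting the two observations together, a defensive caller could probe for roughly double the target size before building the string. This is only a sketch: the numbers are scaled down from the 200MB example, mem_check() is a hypothetical trial-allocation probe, and the same caveat applies that a genuinely untrappable out-of-memory abort inside eval defeats it.

```perl
use strict;
use warnings;

# Hypothetical trial-allocation probe (caveat: an untrappable
# out-of-memory abort inside eval defeats this).
sub mem_check {
    my ($bytes) = @_;
    return eval { my $probe = "\0" x $bytes; 1 } ? 1 : 0;
}

# Scaled-down stand-in for the 200MB case: probe for 2X the target,
# since the right-hand-side temporary and $x coexist briefly.
my $want = 20 * 1024 * 1024;              # 20MiB target
if ( mem_check( 2 * $want ) ) {
    my $x = 'x' x $want;                  # both copies fit, briefly
    print length($x), "\n";
}
```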
