
Hi Chris,
No contribution from me as regards a solution - just a couple of follow-up questions (mainly for my own edification).

The call of exit() rather than die() means that, for example, running a Perl REPL interactive shell for PDL can crash without recovery

From that, I deduce that when an OOM error occurs, the OS tells perl it has to exit(), and perl obeys.
However, I had always assumed that when such an error occurred, the OS would simply kill the perl process - no opportunity for perl to perform an exit() ... or to perform anything else, for that matter. Is my assumption incorrect ? (They often are, of course.)

If perl does, in fact, exit() when an OOM error occurs, then it will first execute any END{} blocks.
I don't think that helps *you* in any way, but it would enable one to verify that an OOM error causes perl to exit(). I tried to test this out myself by writing a script with an END{} block that printed something to STDOUT, and having that script generate an OOM error. Only problem was that I couldn't find a way of generating the OOM error :-(
So that's my second question to the assembled monks: "What's the surefire way of generating an OOM error in a perl script ?"
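
For reference, the sort of test script I had in mind is sketched below; the missing ingredient is a line that reliably triggers the OOM:

use strict;
use warnings;

# If perl really does exit() on OOM, this should still be printed,
# since exit() runs END blocks; if the OS killed the process
# outright, it wouldn't be.
END { print "END block executed\n" }

# ... something that reliably generates the OOM error goes here ...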

Cheers,
Rob

Re^2: die rather than exit on out-of-memory failure?
by BrowserUk (Patriarch) on Jan 04, 2011 at 11:57 UTC
    I had always assumed that when such an error occurred, the OS would simply kill the perl process... Is my assumption incorrect ?

    Actually, the OS isn't involved. Perl just exits (via my_exit() in perl.c):

    if ((p = nextf[bucket]) == NULL) {
        MALLOC_UNLOCK;
    #ifdef PERL_CORE
        {
            dTHX;
            if (!PL_nomemok) {
    #if defined(PLAIN_MALLOC) && defined(NO_FANCY_MALLOC)
                PerlIO_puts(PerlIO_stderr(),"Out of memory!\n");
    #else
                char buff[80];
                char *eb = buff + sizeof(buff) - 1;
                char *s = eb;
                size_t n = nbytes;
                PerlIO_puts(PerlIO_stderr(),"Out of memory during request for ");
    #if defined(DEBUGGING) || defined(RCHECK)
                n = size;
    #endif
                *s = 0;
                do {
                    *--s = '0' + (n % 10);
                } while (n /= 10);
                PerlIO_puts(PerlIO_stderr(),s);
                PerlIO_puts(PerlIO_stderr()," bytes, total sbrk() is ");
                s = eb;
                n = goodsbrk + sbrk_slack;
                do {
                    *--s = '0' + (n % 10);
                } while (n /= 10);
                PerlIO_puts(PerlIO_stderr(),s);
                PerlIO_puts(PerlIO_stderr()," bytes!\n");
    #endif /* defined(PLAIN_MALLOC) && defined(NO_FANCY_MALLOC) */
                my_exit(1);    /* ************ <-- exits here ************ */
            }
        }
    #endif
        return (NULL);
    }
    "What's the surefire way of generating an OOM error in a perl script ?"

    This does it for me:

    c:\test\perl-5.13.6>perl -E"$x = chr(0)x2**31"
    Out of memory!

    Personally, I think that if malloc fails for a request larger than, say, 64k, Perl should die, not exit.
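
    If the failure were a trappable die, calling code could recover along these lines. (Hypothetical, of course: as things stand, the eval never gets the chance, because perl has already exit()ed.)

    my $x;
    eval {
        $x = chr(0) x 2**31;    # huge allocation that can fail
        1;
    } or do {
        # only reachable if the failure were a die() rather than an exit()
        warn "large allocation failed: $@";
        $x = chr(0) x 2**20;    # fall back to something smaller
    };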


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      Perl just exits (via my_exit() in perl.c)

      Thanks for that. I now get the picture.

      c:\test\perl-5.13.6>perl -E"$x = chr(0)x2**31"

      That doesn't generate the OOM for me on any of my perls (on both Linux and Windows). As best I can tell, the assignment fails, but there's no exit:
      C:\>perl -E"$x = chr(0)x2**6;print length($x)" 64 C:\>perl -E"$x = chr(0)x2**31;print length($x)" 0 C:\>
      Even with warnings switched on, the assignment simply fails silently.

      UPDATE: BrowserUk was running an x64 build of perl. When I switch to any of my x64 builds, I then get the OOM error. In order to get that error with my x86 builds, it turns out I just need to run:
      C:\_32>perl -e "$x=chr(0)x2**29;print length($x)"
      Out of memory!
      Incidentally, what's the significance of '-E' (as opposed to the more usual '-e') in the command ?
      My copy of Programming Perl (3rd edition) pre-dates the arrival of '-E', and I don't know where perl itself documents its command line switches.

      Update: Duh ... 2 minutes after posting, I think of trying 'perl -h' ... and there it is:
      -E program like -e, but enables all optional features
      Cheers,
      Rob

      "Personally, I think that if malloc fails for a request larger than say 64k, Perl should die not exit".

      This idea gets to the crux of the PDL malloc issue. Most discussions of out-of-memory scenarios for perl seem to assume that if you "hit the wall" on one malloc, you'll fail on the next one (or soon after), so the interpreter cannot and must not try to get any more memory. For the problematic PDL mallocs, the sizes can be upwards of 100MiB, so the fact that a malloc failed for such a large chunk of memory says little about whether more memory is available in smaller contiguous chunks.

      Reviewing this discussion (and others referenced here), it seems that what might work would be something like a fake signal, $SIG{NOMEM} (generated by the perl interpreter itself), for which a user could install a handler to be run when a memory allocation fails. While that might work, it seems like an ugly graft onto the Perl language for such an edge case.
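
      To make the proposal concrete: a handler for such a pseudo-signal might be installed like any other %SIG entry. Purely hypothetical, to be clear; no NOMEM entry exists in any perl, and this only sketches the interface suggested above.

      our %cache;    # stand-in for some large, droppable data

      # HYPOTHETICAL: perl has no NOMEM pseudo-signal; illustration only.
      $SIG{NOMEM} = sub {
          warn "memory allocation failed; dropping caches\n";
          %cache = ();    # free something big, so the failed
                          # allocation could then be retried
      };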

        I guess the problem with trying to fix this would be addressing all the places in the rest of the codebase that have long been coded on the assumption that if malloc(), or whichever of the myriad wrappers is used to call it, returns at all, then the requested memory was available. Though I feel pretty certain I've seen plenty of code that checks the return from Newxx() etc. That said, I wouldn't expect there to be many places where large contiguous chunks of memory are allocated.

        As is, the only pragmatic step the PDL authors might take would be to try calling the OS memory allocator directly for large allocations first. If the OS says okay, then give that memory back to the OS and immediately call Perl's malloc() for it. The window of cases where the OS says yes and Perl says no should be pretty small. But that would still require action by the authors of PDL and any other similar modules that routinely allocate and manipulate large contiguous chunks of RAM.
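
        A rough sketch of that probe-first idea, here using Inline::C just to reach the system malloc() from Perl (PDL itself would do this at the XS level; the 100 MiB figure is only illustrative):

        use strict;
        use warnings;
        use Inline C => q{
            #include <stdlib.h>
            /* Ask the system allocator, not Perl's, so a failure comes
               back as NULL instead of triggering Perl's OOM exit(). */
            int os_can_allocate(unsigned long bytes) {
                void *p = malloc((size_t)bytes);
                if (p == NULL)
                    return 0;
                free(p);    /* give it straight back to the OS */
                return 1;
            }
        };

        my $bytes = 100 * 2**20;    # a typically problematic PDL-sized request
        if (os_can_allocate($bytes)) {
            # small window here in which something else could grab the memory
            my $buf = chr(0) x $bytes;    # now let Perl's malloc have it
            print "got ", length($buf), " bytes\n";
        } else {
            print "no contiguous $bytes-byte chunk available\n";
        }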

        Perhaps the simplest solution would be a new module that a user program can call to check whether the process will be able to satisfy a particular allocation request. Say Devel::MemCheck::memCheck().
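
        The interface could hardly be simpler. (Devel::MemCheck is just the name suggested above, not an existing module; this is only what using it might look like.)

        use Devel::MemCheck qw(memCheck);    # hypothetical module

        my $need = 500 * 2**20;    # what we are about to ask PDL to allocate
        if (memCheck($need)) {
            # go ahead with the big allocation
        } else {
            # degrade gracefully: a smaller piddle, disk-backed data, ...
        }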


        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.
Re^2: die rather than exit on out-of-memory failure?
by Anonymous Monk (Hermit) on Jan 04, 2011 at 12:03 UTC
    I had always assumed that when such an error occurred, the OS would simply kill the perl process...

    Not necessarily.  When a malloc request fails, malloc() simply returns NULL and sets errno to ENOMEM. The memory-requesting application may then do whatever it sees fit to deal with the situation.

    Maybe you were thinking of the case when the OS itself is running out of memory, for which some OSes have emergency code ("OOM killer"), which sacrifices one or more processes to keep the system as a whole alive.