Re: die rather than exit on out-of-memory failure?
by JavaFan (Canon) on Jan 04, 2011 at 10:29 UTC
So, basically, you want Perl to continue after it has been informed that its request for more memory has failed (continue, because a die can be trapped, so a die need not be the end). Note that such a failure typically happens halfway through processing a Perl command; the interpreter may be in an inconsistent state (as it's currently in C-land, not Perl-land).
How do you suppose perl should handle this? It will be extremely limited in what it can do: it must assume it cannot allocate any more memory, which also means dropping back to Perl-land is a no-no (because almost anything there could result in additional memory requests).
I've read about some emergency memory space that can be built into the perl but that seems to be for small items and static in nature.
I do not think the emergency memory space is there so Perl can continue on its merry way. It isn't an additional jerry can. It's memory that is claimed when the process starts (so it will run out of memory sooner). But as far as I know (I can't say I know from experience), it's memory that can be used so Perl can die instead of exit. And then you may be able to trap that die.
Re: die rather than exit on out-of-memory failure?
by Anonyrnous Monk (Hermit) on Jan 04, 2011 at 11:34 UTC
As already mentioned, see also Is $^M a leftover April Fool? for lots of related discussion.
In short, the idea seems to be that when Perl runs into an "out of memory" situation, it goes on to die (which can be trapped). Immediately before that it frees the previously allocated emergency buffer (the PV behind $^M), so any exception handler would have some memory resources to do its job of cleaning up/freeing more memory. For this kind of out-of-memory handling to be active, Perl has to be built with -DPERL_EMERGENCY_SBRK (AFAICT).
Although investigation of the sources (in particular malloc.c) confirms this theory, no one seems to have been able to come up with a short snippet demonstrating the behavior... at least not with any recent version of Perl. (I haven't tried it myself so far, but out of mere curiosity I might give it a go later, if time permits.)
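For what it's worth, here is a minimal sketch of what such a test might look like, assuming a perl built with -DPERL_EMERGENCY_SBRK and perl's own malloc; on a stock build the process will most likely just print "Out of memory!" and exit (or be killed by the OS), which is exactly the point of the thread:

use strict;
use warnings;

$^M = 'x' x (1 << 16);    # pre-allocate a 64K emergency buffer (the PV behind $^M)

my @hog;
my $ok = eval {
    push @hog, 'x' x (1 << 20) while 1;    # keep allocating until memory runs out
    1;
};
unless ($ok) {
    @hog = ();                             # release what we grabbed
    warn "trapped allocation failure: $@";
}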
Re: die rather than exit on out-of-memory failure?
by syphilis (Archbishop) on Jan 04, 2011 at 11:13 UTC
Hi Chris,
No contribution from me as regards a solution - just a couple of follow-up questions (mainly for my own edification).
The call of exit() rather than die() means that, for example, running a Perl REPL interactive shell for PDL can crash without recovery
From that, I deduce that when an OOM error occurs, the OS tells perl it has to exit(), and perl obeys. However, I had always assumed that when such an error occurred, the OS would simply kill the perl process - no opportunity for perl to perform an exit() ... or to perform anything else, for that matter. Is my assumption incorrect ? (They often are, of course.)
If perl does, in fact, exit() when an OOM error occurs, then it will first execute any END{} blocks. I don't think that helps *you* in any way, but it would enable one to verify that an OOM error causes perl to exit().
I tried to test this out myself by writing a script with an END{} block that printed something to STDOUT, and having that script generate an OOM error. The only problem was that I couldn't find a way of generating the OOM error :-( So that's my second question to the assembled monks: "What's the surefire way of generating an OOM error in a perl script?"
Cheers, Rob
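The relevant code in Perl's malloc.c (when perl is built with its own malloc); note the my_exit(1) near the end of the out-of-memory path: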
if ((p = nextf[bucket]) == NULL) {
    MALLOC_UNLOCK;
#ifdef PERL_CORE
    {
        dTHX;
        if (!PL_nomemok) {
#if defined(PLAIN_MALLOC) && defined(NO_FANCY_MALLOC)
            PerlIO_puts(PerlIO_stderr(),"Out of memory!\n");
#else
            char buff[80];
            char *eb = buff + sizeof(buff) - 1;
            char *s = eb;
            size_t n = nbytes;
            PerlIO_puts(PerlIO_stderr(),"Out of memory during request for ");
#if defined(DEBUGGING) || defined(RCHECK)
            n = size;
#endif
            *s = 0;
            do {
                *--s = '0' + (n % 10);
            } while (n /= 10);
            PerlIO_puts(PerlIO_stderr(),s);
            PerlIO_puts(PerlIO_stderr()," bytes, total sbrk() is ");
            s = eb;
            n = goodsbrk + sbrk_slack;
            do {
                *--s = '0' + (n % 10);
            } while (n /= 10);
            PerlIO_puts(PerlIO_stderr(),s);
            PerlIO_puts(PerlIO_stderr()," bytes!\n");
#endif /* defined(PLAIN_MALLOC) && defined(NO_FANCY_MALLOC) */
            my_exit(1);    /* <-- the exit in question, rather than a die */
        }
    }
#endif
    return (NULL);
}
"What's the surefire way of generating an OOM error in a perl script ?"
This does it for me:

c:\test\perl-5.13.6>perl -E"$x = chr(0)x2**31"
Out of memory!
Personally, I think that if malloc fails for a request larger than say 64k, Perl should die not exit.
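If it did die, a caller could recover with an ordinary eval. A purely hypothetical sketch (with current perls the process exits before the eval ever sees the failure):

# Hypothetical: this only works if a failed large allocation raised a
# trappable die instead of calling my_exit(1).
my $big = eval { "\0" x (2**31) };
unless (defined $big) {
    warn "large allocation failed: $@";
    # fall back to a chunked or disk-backed strategy here
}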
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
Perl just exits (via my_exit() in perl.c)
Thanks for that. I now get the picture.
c:\test\perl-5.13.6>perl -E"$x = chr(0)x2**31"
That doesn't generate the OOM for me on any of my perls (on both Linux and Windows). As best I can tell, the assignment fails, but there's no exit:
C:\>perl -E"$x = chr(0)x2**6;print length($x)"
64
C:\>perl -E"$x = chr(0)x2**31;print length($x)"
0
C:\>
Even with warnings switched on, the assignment simply fails silently.
UPDATE: BrowserUk was running an x64 build of perl. When I switch to any of my x64 builds, I then get the OOM error. In order to get that error with my x86 builds, it turns out I just need to run:
C:\_32>perl -e "$x=chr(0)x2**29;print length($x)"
Out of memory!
Incidentally, what's the significance of '-E' (as opposed to the more usual '-e') in the command? My copy of Programming Perl (3rd edition) pre-dates the arrival of '-E', and I don't know where perl itself documents its command-line switches.
Update: Duh ... 2 minutes after posting, I thought of trying 'perl -h' ... and there it is:
-E program like -e, but enables all optional features
Cheers, Rob
"Personally, I think that if malloc fails for a request larger than say 64k, Perl should die not exit".
This idea gets to the crux of the PDL malloc issue. Most of the out-of-memory scenarios for perl seem to assume that if you "hit the wall" on one malloc, you'll fail on the next one (or soon after), so the interpreter cannot and must not try to get any more memory. With the problematic PDL mallocs, the sizes can be upwards of 100MiB or more, so the fact that the malloc failed for such a large chunk of memory says little about whether more memory is available in smaller contiguous chunks.

Reviewing this discussion (and the others referenced here), it seems that what might work would be something like a fake signal, $SIG{NOMEM}, generated by the perl interpreter, for which a user could install their own handler for the case that a memory allocation fails (see the sketch below). While that might work, it seems like an ugly graft onto the Perl language for such an edge case.
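Purely as illustration, such a handler might look something like the following. Everything here is invented: %SIG has no NOMEM entry in any perl, and the argument and return-value semantics are made up.

# Hypothetical sketch only; no released perl implements $SIG{NOMEM}.
my %workspace_cache;
$SIG{NOMEM} = sub {
    my ($requested_bytes) = @_;    # size of the failed request (invented)
    warn "allocation of $requested_bytes bytes failed; dropping caches\n";
    %workspace_cache = ();         # release application-level memory
    return 1;                      # ask the allocator to retry (invented)
};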
I had always assumed that when such an error occurred, the OS would simply kill the perl process...
Not necessarily. When a malloc request fails, it simply returns NULL and sets errno to ENOMEM. The memory-requesting application may then do whatever it sees fit to deal with the situation.
Maybe you were thinking of the case where the OS itself is running out of memory, for which some OSes have emergency code (the "OOM killer") that sacrifices one or more processes to keep the system as a whole alive.
Re: die rather than exit on out-of-memory failure?
by sundialsvc4 (Abbot) on Jan 04, 2011 at 02:28 UTC
There is ... ahem ... a relatively simple solution to that “problem,” to wit:
Do not attempt to allocate a data-object of that size.
What you are actually doing, in such a situation, is “allocating a disk file the hard way.” All storage used by a process is, after all, “virtual storage,” and this means “disk file.” The virtual storage subsystem is not designed to handle a process hitting 200 megabytes’ worth of 4K pages all at once.
Such data should be stored in disk files. Those files can be processed in a variety of ways, such as the tie mechanism, or by mapping portions of them into the virtual-storage space, but one must never completely forget the purely physical aspects of disk storage: transfer time, rotational latency, and seek time.
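As one illustration of the disk-file route, here is a sketch using the core Tie::File module (the filename and the pattern being counted are made up):

use strict;
use warnings;
use Tie::File;

# Sketch: scan a large dataset record by record from disk instead of
# slurping it all into memory.  'big_data.txt' is a made-up filename.
tie my @records, 'Tie::File', 'big_data.txt'
    or die "cannot tie big_data.txt: $!";

my $errors = grep { /ERROR/ } @records;    # records are fetched on demand
print "$errors matching records\n";

untie @records;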
The problem is that perl exit()s rather than die()s, which makes recovery on failure a bit problematic. Reducing the memory footprint of the application is a workaround, but in the case of PDL, where the perl SV data is actually an opaque object being processed by optimized C computational kernels, it is not "relatively simple" to implement.

I'm thinking that a possible approach would be to replace the implicit perl memory allocation with our own calls to the system malloc routine, with the perl object reference now being controlled by magic or some such...

I'm fine with a failure to allocate these large objects, but I would like the perl interpreter not to exit. The current pdl_malloc using perl SVs is definitely far outside the bounds of expected perl usage. However, one of the goals of PDL was to make exactly such memory and computational performance problems accessible from perl.
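In the meantime, a Perl-land size guard (not the magic-based replacement described above, just a stopgap) can at least turn the fatal exit into an ordinary die. A sketch with a made-up limit and helper name:

use strict;
use warnings;
use PDL;

# Made-up limit and helper: refuse obviously oversized requests before
# PDL ever asks malloc for the memory, since that failure cannot be trapped.
my $MAX_ELEMS = 50_000_000;

sub safe_zeroes {
    my @dims = @_;
    my $n = 1;
    $n *= $_ for @dims;
    die "refusing to allocate $n elements (limit $MAX_ELEMS)\n"
        if $n > $MAX_ELEMS;
    return zeroes(@dims);
}

my $pdl = eval { safe_zeroes(20_000, 20_000) }
    or warn "allocation refused: $@";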
Indeed.
Rather than explode the spaceship, there should be a light which says 'please do not press that malloc button again'.
It is relatively easy to generate such a large memory request in PDL, even accidentally, by the nature of the language. And 200 MB objects aren't much these days, in the era of multi-GB memory computers.