in reply to Is $^M a leftover April Fool?

About 20 years ago, when I was programming early Macs, this kind of feature saw real use, since memory was small and there was no swap file. I think it was called a "rainy day fund", or something else just as quaint.

A memory allocation error most likely means one of three things: (1) you've been leaking memory for a long time and just asked for a little bit more; (2) you passed a garbage value (something huge) to the allocator; or (3) the heap is corrupt. With today's cavernous virtual memory, (1) is much less common and can usually be detected long before it happens, so this feature wouldn't be useful there. (2) is probably pretty rare, but the feature might help there. With (3), it could even be dangerous to continue (e.g. with unflushed open disk files), since you'll be touching the heap at least once and it's already bad.
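To make case (2) concrete, here is a contrived Perl sketch (the corrupt-header data is invented for illustration): a bogus length field read from bad input turns into a single enormous allocation request.

    # Hypothetical illustration of case (2): a garbage length from
    # corrupt input becomes one huge allocation request.
    my $corrupt_header = "\xff\xff\xff\xff";   # stands in for bad input
    my $len = unpack 'N', $corrupt_header;     # reads as 4294967295
    my $buf = "\0" x $len;                     # asks the allocator for ~4GB;
                                               # on most builds: "Out of memory!"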

Re^2: Is $^M a leftover April Fool?
by BrowserUk (Patriarch) on Jan 06, 2005 at 23:26 UTC

    I'm not saying that the feature couldn't be useful.

    I am only saying that, as it is currently implemented (from my best efforts to understand it), the only time it will ever be utilised is if, in the course of running a Perl program, the interpreter attempts to allocate more memory and gets a failure back from malloc/calloc/realloc.

    At that point, the Perl interpreter may attempt to invoke Perl_croak() in order to report this fatal error.

    As Perl_croak() is a wrapper around sprintf(), it may need to allocate a buffer into which to format the error text. If the Perl programmer has had the foresight to preallocate a reserve via $^M, then Perl_croak() may be able to obtain that buffer where it would otherwise have failed.
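    For reference, perlvar documents how such a reserve is established, though only a perl built with -DPERL_EMERGENCY_SBRK and using Perl's own malloc will honour it; on any other build the assignment is inert:

        # Effective only if this perl was compiled with -DPERL_EMERGENCY_SBRK
        # and uses Perl's own malloc; otherwise this is just an ordinary string.
        $^M = 'a' x (1 << 16);   # set aside a 64KB emergency memory pool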

    There are two things wrong with that though.

    • As far as my exploration of the source has gone, the only text that is ever issued when a memory allocation attempt fails is "Out of memory".

      As far as I am aware, there is never any attempt to relate the error back to a line in the Perl source that instigated the memory allocation that failed. It would probably be near impossible to do that.

      In the absence of any variable information, the need to allocate memory in order to die seems minimal.

    • Even if the message produced at the point memory allocation failed did contain variable information (line numbers, traceback, whatever), it would surely be possible to hardcode a preallocated buffer space of 256 or 512 bytes for this purpose and remove the need for the Perl programmer to do that preallocation?

      Even in an embedded system, that sort of buffer is negligible. Years ago, when I needed a reliable buffer in which to construct fatal-error traceback information in a highly memory-constrained environment (a device driver that could never occupy more than one 64k segment), the solution I adopted was to re-use the constant string table for this purpose.

      In a process that occupies at least 1.5 MB of RAM, given that this buffer is, by definition, only used just prior to the process terminating, surely there has to be a suitably sized chunk of memory somewhere that can be re-used for this purpose?

    I'm also saying that, given the Perl runtime will know when it is utilising the $^M reserve (if one has been established), it would be possible for it to inform the Perl script, perhaps via one of the unused signal values, and so give it a chance to attempt some form of cleanup and/or reporting prior to the final act.
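    A user-space approximation of that idea, as a sketch only: hold a ballast scalar and release it from a $SIG{__DIE__} handler. This assumes the out-of-memory error is actually trappable on your build (which is precisely what the $^M mechanism is meant to make true), and the buffer size and message pattern below are invented for illustration:

        # Sketch: keep a ballast scalar, and free it from a __DIE__ handler
        # so any cleanup/reporting code has some memory to work with.
        my $ballast = "\0" x (256 * 1024);      # 256KB reserve; size is arbitrary

        $SIG{__DIE__} = sub {
            my ($msg) = @_;
            if ($msg =~ /^Out of memory/) {
                undef $ballast;                 # hand the reserve back to the heap
                # ... attempt logging or cleanup here ...
            }
            die $msg;                           # re-raise the original error
        };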

    But mostly I am saying, given the current state of the documentation of this feature, I seriously wonder how many people have ever attempted to use it. And of those that did, how many saw some benefit from it?

    So the program managed to issue the "Out of memory" message and self-terminate, rather than segfault and produce a core dump. But did that actually benefit anyone? Did it save the day? Prevent a greater malady? Help the programmer track down the cause?


    Examine what is said, not who speaks.
    Silence betokens consent.
    Love the truth but pardon error.