arthurg has asked for the wisdom of the Perl Monks concerning the following question:

I'm running Perl programs in mod_perl in Apache (2.2) on RHEL.

Processes that grow too large ruin performance; I want a general way to prevent that. I don't mind letting an oversized process die, but I want to detect and log that event and provide some Web feedback to the user.

Unix usage limits seem like the solution. The bash shell's 'ulimit' reports on and sets limits. They're inherited by children through fork() and exec(). So, for example, I can start Apache with the script

#!/bin/sh
ulimit -v 1024000
exec /usr/sbin/apachectl restart

and limit each process' virtual memory to 1 GB. But detecting and handling the death of a process that tries to allocate more memory seems hard. There are some hints in documentation (http://search.cpan.org/~gozer/mod_perl-1.29/mod_perl_traps.pod#Perl_Modules_and_Extensions and http://www.perl.com/doc/manual/html/pod/perlvar.html) that by

1) compiling Perl with -DPERL_EMERGENCY_SBRK (run "./Configure -DPERL_EMERGENCY_SBRK -des -Dprefix=~/localperl" before make; the code between lines 1130 and 1279 of "malloc.c" appears to run only if PERL_EMERGENCY_SBRK is defined)

2) and defining $SIG{__DIE__} = \&deathHandler;

one can run a little code in deathHandler before finally dying.
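A minimal sketch of that idea (the handler name is the OP's; the log path and message are illustrative, and `$^M` only actually reserves an emergency pool when perl was built with -DPERL_EMERGENCY_SBRK and its own malloc):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# $^M is the emergency memory pool. Assigning it is harmless on a stock
# perl, but the pool is only consulted when perl was built with
# -DPERL_EMERGENCY_SBRK and -Dusemymalloc.
$^M = 'a' x 65536;

sub deathHandler {
    my ($err) = @_;
    # Keep this handler small: under memory pressure even I/O can fail.
    if ( open my $log, '>>', '/tmp/oversize.log' ) {    # example path
        print {$log} scalar localtime, " pid $$ dying: $err";
        close $log;
    }
    die $err;    # re-raise so the request still aborts
}

$SIG{__DIE__} = \&deathHandler;

# Demonstration only: any die() now passes through the handler first.
eval { die "Out of memory!\n" };
print "caught: $@";
```

Note that `$SIG{__DIE__}` also fires for die() inside eval, so a real handler would usually need to check `$^S` or the message before logging.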

Does this make sense? Will it work?

Thanks

A

Re: Monitoring the death of oversized Perl processes
by WizardOfUz (Friar) on Mar 06, 2010 at 10:06 UTC
Re: Monitoring the death of oversized Perl processes
by zwon (Abbot) on Mar 06, 2010 at 07:36 UTC

    For PERL_EMERGENCY_SBRK to have any effect you must use Perl's malloc, which doesn't look like a good idea to me. Also, it affects only Perl code: if Apache code exceeds the limit, deathHandler will not be invoked. And if the limit is exceeded during automatic stack expansion, your process will be killed with SIGSEGV, and again deathHandler will not be invoked. So I don't think this is a good solution.
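The point about the parent not seeing the handler can be demonstrated from the shell (a sketch; the limit and allocation size are example values, and the exact failure mode, an untrappable "Out of memory!" versus a signal, depends on where the allocation fails):

```shell
#!/bin/sh
# Run a perl one-liner under a tight virtual-memory limit (512 MB here)
# and try to build a 1 GB string. The allocation fails; whatever the
# failure mode, the parent only learns about it from the exit status.
( ulimit -v 524288; perl -e '$x = "a" x (1024 * 1024 * 1024)' ) 2>/dev/null
status=$?
echo "child exit status: $status"
```

A shell reports signal deaths as 128 plus the signal number, so a status above 128 distinguishes a kernel kill from an ordinary perl exit.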

      For PERL_EMERGENCY_SBRK to have any effect you must use Perl's malloc, which doesn't look like a good idea to me.
      And you make this claim based on what exactly? From the install file:
      =head2 Malloc Issues
      
      Perl relies heavily on malloc(3) to grow data structures as needed,   
      so perl's performance can be noticeably affected by the performance of
      the malloc function on your system.  The perl source is shipped with a
      version of malloc that has been optimized for the typical requests from
      perl, so there's a chance that it may be both faster and use less memory
      than your system malloc.
              
      However, if your system already has an excellent malloc, or if you are
      experiencing difficulties with extensions that use third-party libraries
      that call malloc, then you should probably use your system's malloc.
      (Or, you might wish to explore the malloc flags discussed below.)
      
      =over 4
      
      =item Using the system malloc
      
      To build without perl's malloc, you can use the Configure command
      
              sh Configure -Uusemymalloc
      
      or you can answer 'n' at the appropriate interactive Configure prompt.
      
      Note that Perl's malloc isn't always used by default; that actually
      depends on your system. For example, on Linux and FreeBSD (and many more
      systems), Configure chooses to use the system's malloc by default.
      See the appropriate file in the F<hints/> directory to see how the
      default is set.
      
        And you make this claim based on what exactly?
        Which claim?
        • For PERL_EMERGENCY_SBRK to have any effect you must use Perl's malloc
        • doesn't look like a good idea to me
Re: Monitoring the death of oversized Perl processes
by ahmad (Hermit) on Mar 06, 2010 at 15:15 UTC

    Make a script that runs once every minute; it will have to parse the output of the top -c -n 1 command, kill the scripts that use too much memory, and log some useful info to a file
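A sketch of such a "killer script" (the threshold, process name, and log path are example values, and it reads ps rather than top, which is easier to parse non-interactively):

```shell
#!/bin/sh
# Example values, not from the thread.
LIMIT_KB=1048576                 # kill anything over 1 GB resident
LOG=/tmp/oversize-kill.log

# List pid, resident size (KB), and command name for each httpd worker;
# the '=' after each column name suppresses the header line.
ps -o pid=,rss=,comm= -C httpd | while read pid rss comm; do
    if [ "$rss" -gt "$LIMIT_KB" ]; then
        echo "$(date) killing $comm pid $pid rss=${rss}KB" >> "$LOG"
        kill -TERM "$pid"
    fi
done
```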

      Interesting, but not reliable. By the time the 'KillerScript' runs, an oversized process could have frozen the system so badly that top wouldn't run. In any case, an application should be able to control its child processes more directly.