arthurg has asked for the wisdom of the Perl Monks concerning the following question:
Processes that grow too large ruin performance; I want a general way to prevent that. I don't mind letting an oversized process die, but I want to detect and log that event and provide some Web feedback to the user.
Unix usage limits seem like the solution. The bash shell's 'ulimit' reports on and sets limits. They're inherited by children through fork() and exec(). So, for example, I can start Apache with the script
    #!/bin/sh
    ulimit -v 1024000
    exec /usr/sbin/apachectl restart
and limit each process's virtual memory to 1 GB (ulimit -v takes kilobytes, so 1024000 KB ≈ 1 GB). But detecting and handling the death of a process that tries to allocate more memory than that seems hard. There are hints in the documentation (http://search.cpan.org/~gozer/mod_perl-1.29/mod_perl_traps.pod#Perl_Modules_and_Extensions and http://www.perl.com/doc/manual/html/pod/perlvar.html) that by
1) compiling Perl with -DPERL_EMERGENCY_SBRK (run "./Configure -DPERL_EMERGENCY_SBRK -des -Dprefix=~/localperl" before make; it seems that the code between lines 1130 and 1279 of "malloc.c" runs only if PERL_EMERGENCY_SBRK is defined)
2) and defining $SIG{__DIE__} = \&deathHandler;
one can run a little code in deathHandler before finally dying.
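A minimal sketch of steps 1) and 2), assuming perl was indeed built with PERL_EMERGENCY_SBRK (the handler body and log path are illustrative, not from the docs; without that build flag, $^M has no effect and an out-of-memory condition usually aborts the process before any handler runs):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Pre-allocate an emergency pool. When Perl's own malloc runs out of
    # memory, it frees $^M and uses that space to keep running just long
    # enough to die cleanly. Only meaningful with PERL_EMERGENCY_SBRK.
    $^M = 'a' x (1 << 16);

    # Illustrative handler: log the failure, then let the process die.
    # Keep it small -- it runs while memory is nearly exhausted.
    sub deathHandler {
        my ($msg) = @_;
        if (open my $log, '>>', '/tmp/oversized.log') {
            print {$log} scalar(localtime), " pid $$ died: $msg";
            close $log;
        }
        die $msg;    # re-raise; __DIE__ is disabled inside the handler
    }
    $SIG{__DIE__} = \&deathHandler;

Note one caveat from perlvar: $SIG{__DIE__} also fires for exceptions that are later caught by eval, so a real handler may want to inspect $^S before logging.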
Does this make sense? Will it work?
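As a complementary approach that needs no rebuilt perl, the death can also be detected from outside via the exit status — a sketch, assuming a POSIX shell on Linux (the wrapper's name, arguments, and log path are all illustrative):

    #!/bin/sh
    # Hypothetical wrapper: run a command under a virtual-memory cap
    # and log any abnormal exit.
    LIMIT_KB=1024000          # ulimit -v takes kilobytes; 1024000 KB ~ 1 GB
    LOG=./oversized.log

    # Run the command in a subshell so the limit applies only to it.
    ( ulimit -v "$LIMIT_KB"; exec "$@" )
    status=$?
    if [ "$status" -ne 0 ]; then
        echo "$(date) command '$*' exited with status $status" >> "$LOG"
    fi
    exit "$status"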
Thanks
A
Replies are listed 'Best First'.
Re: Monitoring the death of oversized Perl processes
by WizardOfUz (Friar) on Mar 06, 2010 at 10:06 UTC
Re: Monitoring the death of oversized Perl processes
by zwon (Abbot) on Mar 06, 2010 at 07:36 UTC
    by JavaFan (Canon) on Mar 06, 2010 at 08:35 UTC
    by zwon (Abbot) on Mar 06, 2010 at 09:37 UTC
    by JavaFan (Canon) on Mar 06, 2010 at 15:45 UTC
    by zwon (Abbot) on Mar 06, 2010 at 16:24 UTC
Re: Monitoring the death of oversized Perl processes
by ahmad (Hermit) on Mar 06, 2010 at 15:15 UTC
    by arthurg (Acolyte) on Mar 06, 2010 at 23:22 UTC