mod_perl and lazy zombies

by Odud (Pilgrim)
on Jun 07, 2001 at 18:22 UTC

Odud has asked for the wisdom of the Perl Monks concerning the following question:

Running under mod_perl/Apache I have the following snippet
open(RPC, "/bin/rpcinfo -p foo|") or die ...
while (<RPC>) {
    # do something with the output
}
Because of laziness I didn't bother to close(RPC), but I noticed that zombies for sh and rpcinfo were appearing. They went away when I added the close.
I guess that this is expected, given that the purpose of mod_perl is to keep the module loaded, but I was surprised not to have come across this anywhere in the documentation.
Can someone confirm that this is expected behaviour? If not, perhaps it shows that something is wrong with my mod_perl/Apache configuration.

Replies are listed 'Best First'.
Re: mod_perl and lazy zombies
by andreychek (Parson) on Jun 07, 2001 at 18:53 UTC
    I ran across this in the mod_perl guide:

    When you write a script running under mod_cgi, you can get away with sloppy programming, like opening a file and letting the interpreter close it for you when the script had finished its run...
    ...
    For mod_perl, before the end of the script you must close() any files you opened!
    ...
    If you forget to close(), you might get file descriptor leakage and (if you flock()ed on this file descriptor) also unlock problems.

    Later on in that same section, they actually go as far as recommending you use IO::File to work with files under mod_perl, in case the interpreter stops before it gets to run the close statement (i.e. the user presses the stop button in their browser).
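    For instance, a minimal sketch of that approach, assuming the command from the question above (the loop body is just a placeholder):

    use IO::File;

    # A lexical IO::File object: when $rpc goes out of scope it is destroyed,
    # which closes the handle even if the code never reaches an explicit close.
    my $rpc = IO::File->new("/bin/rpcinfo -p foo|")
        or die "can't run rpcinfo: $!";

    while (my $line = <$rpc>) {
        # do something with the output
    }

    $rpc->close;    # still worth doing explicitly so the result can be checked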
    -Eric
      I must have missed that somewhere in the 666 pages! Do you have the section number? Also, do you think "sloppy" eq "lazy"?
      Update: I didn't look at your reference properly - I see it's in section 9.33, "File handlers and locks leakages" - I still might take exception to "sloppy" though.
      Pete
        Heh.. well, don't feel bad, I can't say I found it on my first try either :-)

        As far as it being sloppy programming -- perhaps that is a bit too strong a word, but the guy who has to maintain your code when you are gone would most likely enjoy knowing when you are done using a particular file :-) Or even if you go to work on it later on, having that close statement would clearly state that you are finished with the filehandle, and might make your life a bit easier.

        But, as a Perl programmer, it's your absolute right to code it that way if that's how you want to and you feel it makes your life easier :-)
        -Eric
Re: mod_perl and lazy zombies
by no_slogan (Deacon) on Jun 07, 2001 at 18:48 UTC
    Yes, that's the correct behavior for mod_perl. The RPC filehandle is a global, and global variables are persistent. Sometimes you might want your files to stay open (e.g. persistent database connections). Other times, it's a good idea to use localized filehandles with a gensym or some other trick, so they don't get left open.
    use Symbol;
    my $rpc = gensym();
    open $rpc, ...
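    A slightly fuller sketch of that idea, again using the command from the question (the subroutine name is only for illustration):

    use Symbol qw(gensym);

    sub fetch_rpcinfo {
        my $rpc = gensym();    # anonymous glob instead of the package-global RPC
        open($rpc, "/bin/rpcinfo -p foo|") or die "can't run rpcinfo: $!";

        while (<$rpc>) {
            # do something with the output
        }

        # The explicit close reaps the child; its exit status lands in $?.
        close($rpc) or warn "rpcinfo exited with status " . ($? >> 8);
    }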
      Good point - of course the filehandles will be global, but I hadn't spotted that until you pointed it out.
Re: mod_perl and lazy zombies
by gildir (Pilgrim) on Jun 07, 2001 at 19:11 UTC
    With open(FH,"....|") Perl does not in fact open a file, but a pipe to a newly created process. When this process exits, it stays around as a zombie for as long as the parent process does not collect its exit status. In C this is done by calling the wait() or waitpid() function. Perl gives you an abstraction for accessing processes as files, but the child's exit status must still be collected, and that is done in the close function.

    There is no direct or easy way for mod_perl to do this for you (except for some playing with a SIGCHLD handler, but there you will be subject to filehandle leakage as well). So just call close when it is appropriate.

    Always calling close on everything that was opened is good practice in any environment, not only under mod_perl. But because mod_perl processes are long-running, the bad practice shows up as a problem.
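    As a plain-Perl illustration of that point (nothing mod_perl-specific here; the command is just the one from the question):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # The piped open forks a child process to run the command.
    open(my $fh, "/bin/rpcinfo -p foo|") or die "piped open failed: $!";

    while (<$fh>) {
        # consume the output
    }

    # close() on a piped handle waits for the child - so it never lingers
    # as a zombie - and puts the child's wait status into $?.
    if (close($fh)) {
        print "rpcinfo exited cleanly\n";
    }
    else {
        printf "rpcinfo failed, exit status %d\n", $? >> 8;
    }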

Re: mod_perl and lazy zombies
by mugwumpjism (Hermit) on Jun 07, 2001 at 19:07 UTC

    If you really don't care about the return code, you could set:

    $SIG{CHLD} = "IGNORE";

    And you should not have to worry about the zombies on most platforms. See the perlipc(1p) man page and search for CHLD for a little discussion on that.

      Yes and no.

      Yes. You get rid of zombies on some systems, but you should rather use

      use POSIX ":sys_wait_h";
      $SIG{CHLD} = sub {
          my $kid;
          do {
              $kid = waitpid(-1, WNOHANG);
          } while $kid > 0;    # stop once no more exited children are waiting
      };
      This will collect all the zombies immediately as they become 'undead'.

      No. This will not solve the problem of the missing close. When open(FH,'...|') is called, a pipe is created as a pair of file descriptors. These file descriptors will not be reused until explicitly closed by the close syscall. Therefore, if your program serves thousands of requests in one process, you will run out of file descriptors. One possible solution is the MaxRequestsPerChild 200 Apache configuration directive, which limits each server process to serving only 200 requests before it dies. But the whole idea of long-running mod_perl processes is somewhat hindered by this.

      I wasn't too worried about them - after all, the numbers weren't increasing - it was just odd to see them appear on a previously clean machine. I've always felt that in general they are Not A Good Thing to have and a sign that something odd may be happening.
      Pete

        Zombie processes are just child processes that have exited, but whose parent hasn't called wait() yet to see what the child's return code was. They don't consume any resources except a process table entry.

        When the child exits/dies, the parent gets sent SIGCHLD to notify it. If it doesn't call wait() in that signal handler, the child stays a zombie until wait() is called. If you haven't defined a signal handler, the default action will be to leave it as a zombie - after all, how does the OS know that you're not going to want that return code later?

        Library calls like system() do the wait() for you. popen doesn't. That's what close FH is effectively doing on a popen'ed filehandle.
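        The same mechanism in miniature, outside of piped opens - a minimal fork/waitpid sketch (the exit code 7 is arbitrary):

        my $pid = fork();
        die "fork failed: $!" unless defined $pid;

        if ($pid == 0) {
            exit 7;    # child: pretend to do some work, then exit
        }

        sleep 1;       # the child has exited by now; until we wait, it is a zombie
        my $reaped = waitpid($pid, 0);    # collect the status, removing the zombie
        printf "reaped pid %d, exit status %d\n", $reaped, $? >> 8;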

Re: mod_perl and lazy zombies
by dirthurts (Hermit) on Jun 08, 2001 at 11:21 UTC
    Would calling local on the filehandle do any good?

    Like so:

    local (*RPC);
    open(RPC, "/bin/rpcinfo -p foo|") or die ...
    while (<RPC>) {
        # do something with the output
    }
      No - I tried it and the zombies still appear. Even though the filehandle is now local, because I'm running under mod_perl it stays in existence, as there has been no exit from the enclosing block.
      Pete
        So:
        { local(*RPC); #... }
        should be OK? Perhaps it's the same as using IO::File. Anyway, I prefer the second solution: IO::File :)
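        A sketch of that block-scoped variant, keeping the explicit close from the rest of the thread so the child is reaped at a known point and its status can be checked:

        {
            local *RPC;    # confine the glob override to this block
            open(RPC, "/bin/rpcinfo -p foo|") or die "can't run rpcinfo: $!";

            while (<RPC>) {
                # do something with the output
            }

            # Closing here reaps the child and makes its exit status available in $?.
            close(RPC) or warn "rpcinfo exited with status " . ($? >> 8);
        }    # the previous value of *RPC (if any) is restored when the block exits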
(Odud): mod_perl and lazy zombies - a summary
by Odud (Pilgrim) on Jun 08, 2001 at 17:15 UTC
    To be brief - yes, this is what you would expect to happen. Because the open creates the filehandle, because it isn't closed, and because mod_perl keeps the code loaded, the filehandle doesn't disappear - so the children (i.e. the processes needed to provide the data) aren't reaped and become zombies.

    So the answer in this case is not to be lazy (or sloppy), and to do an explicit close.
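    In other words, the original snippet with the one-line fix (a sketch - the loop body and error handling are whatever the handler actually needs):

    open(RPC, "/bin/rpcinfo -p foo|") or die "can't run rpcinfo: $!";

    while (<RPC>) {
        # do something with the output
    }

    # The explicit close reaps the rpcinfo (and sh) children, so no zombies
    # are left behind by the long-lived mod_perl process.
    close(RPC);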

    The number of zombies doesn't grow forever - what you seem to get is (number of httpd processes x number of open calls). In my case this was 9 calls x 4 processes, hence my concern at suddenly seeing 36 zombies appear.
    Subsequent reuse of each process seems to kill off the existing zombies belonging to the process and then create a replacement set.

    If you read the (massive) mod_perl guide it does tell you to do this - but in my haste I didn't spot it.

    Thanks for all the (analysis|help|advice)

    Odud
