mikeo has asked for the wisdom of the Perl Monks concerning the following question:

First off, I'm a newbie. I started learning the language a week ago. My problem might start there. Here's the situation: I've got a 470-line script (with documentation) that works absolutely flawlessly. No problems at all. The problems occur when I wrap the subroutine calls in a loop to repeat the script every x seconds until I kill the thread. As soon as I do this, the FileHandle that I throw my output at can no longer handle as much information and just stops listening. The information thrown at it just flies off into nothingness. Here's what the subroutine call section looks like:
$TIMER = 10;
while (TRUE) {
    &initialScreenOut;
    &beginFileIO;
    &htmlHeader;
    &timeStamp;
    &legend;

    my @PORTS = ("24","26","26","26");
    my @BASE  = ("0x0000F6AE2D41","0x0000F6BC4401","0x0000F6BC3B41","0x0000F6BC5C01");
    &makeTable ( "134.200.212.109","DeskStream1", 4, 26, \@BASE, \@PORTS );

    my @PORTS2 = ("24","24","26");
    my @BASE2  = ("0x0000F6D83301","0x0000F6E2E901","0x0000F64C09C1");
    &makeTable ( "134.200.212.190","DeskStream2", 3, 26, \@BASE2, \@PORTS2 );

    my @PORTS3 = ("26","24","24");
    my @BASE3  = ("0x0000F64C2701","0x0000F64C2281","0x0000F64C3981");
    &makeTable ( "134.200.212.189","DeskStream3", 3, 26, \@BASE3, \@PORTS3 );

    &htmlFooter;
    &finalScreenOut;
    sleep($TIMER);
}
When I comment out the first two lines of this code and the last two lines of this code (effectively, all of the code that handles the looping), the script works fine. When I leave them in, I get the apparent buffer overflow problems with the FileHandle. Anybody have any idea what's going on?

Replies are listed 'Best First'.
Re: Buffer overflow
by PrakashK (Pilgrim) on Jul 07, 2001 at 01:53 UTC
    You seem to be calling several subroutines in your script. It is difficult to know what they might be doing without the complete code.

    Here's my guess: You have a memory leak (perhaps from using global variables), or, if you are opening files and not closing them, you will eventually reach the limit on open file descriptors.
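    For what it's worth, here is the kind of thing I mean (a guess, since we can't see beginFileIO; the handle and filename below are made up): if the script creates a fresh FileHandle on every pass and never closes it, each iteration eats another descriptor until open() starts failing.

    use FileHandle;

    # hypothetical per-pass open without a matching close
    my $fh = FileHandle->new("> /tmp/output.html")
        or die "can't open output: $!";
    print $fh "some output\n";
    # ...
    $fh->close;    # without this, a descriptor leaks on every pass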

    Try calling your perl script from a shell script loop:

    #!/bin/ksh
    TIMER=10
    while :
    do
        perl script.pl
        sleep $TIMER
    done
    If that is not an option, you can also create another perl script that uses the 'do' function to call this script:
    my $TIMER = 10;
    while (1) {
        do "/path/to/your/script.pl";
        sleep $TIMER;
    }
    /prakash
      Thanks for the quick response, PrakashK, but I forgot to mention that I had already tried your second option. The results of that made me even more confused as to what was going on. If I call the script from the command line, it works fine. But it gives me the "overflow" problems if I call it from a file like this:
      #!/usr/bin/perl
      my $TIMER = 10;
      $TEST = 1;
      while ($TEST == 1) {
          do "mrtgnav.pl";
          sleep $TIMER;
          $TEST = 1;
      }
      Note the $TEST variable. When I run what you see above, it has the problem. But the program works when I try and fool it like this:
      #!/usr/bin/perl
      my $TIMER = 10;
      $TEST = 1;
      while ($TEST == 1) {
          do "mrtgnav.pl";
          sleep $TIMER;
          $TEST = 0;
      }
      Help. My head is starting to hurt...

        I expect the reason it gives you no problems the second way is that you're only running it once, sleeping for 10 seconds, and exiting (you exit the loop on the first time through).

        I think you and PrakashK are both under a slight misapprehension as to what do does: it "does" the code in a file you specify, but it does it within the namespace of the currently running script (and it does the same thing every time you do it). This is very useful for importing configuration variables, but not very useful for avoiding memory leaks.

        I suspect, therefore, that if you try substituting system for do, it will solve that problem, giving you an acceptable work-around for running the script repeatedly.
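        Something along these lines might do it (untested; I'm borrowing the script name from your follow-up purely as an illustration):

        #!/usr/bin/perl
        # each pass runs the script in a fresh perl process,
        # so nothing accumulates from one run to the next
        my $TIMER = 10;
        while (1) {
            system("perl", "mrtgnav.pl") == 0
                or warn "mrtgnav.pl exited with status $?\n";
            sleep $TIMER;
        }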

        It will not solve your overall problem, however: most likely that is caused by the use of global variables in a non-scalable manner, as indicated by synapse0 and PrakashK. This can be solved by making your variables lexically scoped, using my (this makes it sound slightly simpler than it is). I'd explain why this is, but Dominus explains it so much better in Coping with Scoping (TPJ) that I'll just recommend that you read that instead. :-)

        If you're modifying globals in your subroutines (which it seems very likely you are), don't do that! :-) Instead, pass values to them explicitly, and give them useful return values. If you write subs this way, casual readers like us (and you, two months from now) will be able to figure out which variables they modify, and mysterious bugs will be less likely to happen.
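        To make that concrete (toy sub and variable names here, not your actual code), the difference is roughly this:

        use strict;

        # instead of a sub that quietly modifies a global...
        #   sub makeRow { $ROW_HTML = "<tr><td>$PORT</td></tr>"; }

        # ...pass in what it needs and hand the result back:
        sub makeRow {
            my ($port) = @_;
            return "<tr><td>$port</td></tr>";
        }

        my $row = makeRow(24);    # the caller decides what to do with it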

        I strongly suggest using use strict; at the top of every large script you write (and the small ones too, just to keep in practice). Again, I'd explain exactly why (it has to do with those lexically scoped variables), but tachyon wrote a node precisely to avoid that duplication of effort, so I'll just point you to it: Use strict warnings and diagnostics or die. (Though if you're new to Perl I should perhaps note that he doesn't mean that last part entirely literally...)

        Sorry for the somewhat scattershot nature of this reply--sometimes that's all my brain will produce.



        If God had meant us to fly, he would *never* have given us the railroads.
            --Michael Flanders

Re: Buffer overflow
by synapse0 (Pilgrim) on Jul 07, 2001 at 02:06 UTC
    Another possibility might lie in the way you're calling your subs.
    When you call a sub using &sub_name (with no parentheses), the current @_ is passed along implicitly, so any changes the sub makes to @_ are visible throughout the script. My explanation may be slightly off on that, but if you're going to call your subs using & with no args, it's safer to pass an empty list, i.e. call &sub_name() rather than &sub_name.
    Without seeing the code, I don't know if that's the case or not, but I do know you'll eventually run into problems calling subs like that.
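    Here's a quick illustration of what I mean (toy subs, nothing to do with the original script):

    sub inner {
        print "inner sees: @_\n";
    }

    sub outer {
        &inner;      # no parens: outer's @_ is passed along implicitly
        &inner();    # empty parens: inner gets an empty @_
    }

    outer("leaked", "args");    # prints "inner sees: leaked args", then "inner sees: "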
    -Syn
Re: Buffer overflow
by MZSanford (Curate) on Jul 07, 2001 at 13:18 UTC
    Assuming the subs do what their names suggest, I believe the problem is that you run the finalScreenOut sub within the loop. Judging from its name, it possibly calls close(). After the close, you would get messages about printing to a closed filehandle if you are running under -w. Not sure, I would need to see the code, but it sounds like it could be right.
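    For example (a toy case, since we haven't seen the real subs), this is the sort of thing that triggers that warning under -w once the handle has been closed inside the loop:

    #!/usr/bin/perl -w
    open(OUT, "> /tmp/demo.html") or die "open failed: $!";
    print OUT "first pass\n";
    close(OUT);                   # roughly what a finalScreenOut-style sub might do
    print OUT "second pass\n";    # warns: print() on closed filehandle OUT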
    may the foo be with you