Re (tilly) 1: any way to control a memory leak
by tilly (Archbishop) on Jan 28, 2002 at 19:53 UTC
When I had to deal with a problem like this I found the information from Devel::Leak rather useful. For tracking down objects I hacked up something like this to find where I was allocating objects that never went away:
package Leaker;
use Data::Dumper;
$Data::Dumper::Indent = 1;

use vars qw(%CLASS);

# Override bless globally so every object creation is recorded.
*CORE::GLOBAL::bless = sub {
    my $obj   = shift;
    my $class = shift || caller();
    delete $CLASS{$obj} if exists $CLASS{$obj};
    $obj = CORE::bless($obj, $class);
    if ($class =~ /ObjClass/ or (caller())[1] =~ /file|names|here/) {
        # print "$obj created\n";
        # Record where frame 1 blessed it (widen the list, e.g. 1..5,
        # for a deeper trace).
        $CLASS{$obj} = join "\n\t", map {join ":", caller($_)} 1;
    }
    $obj;
};

# Catches destruction of any object whose class lacks its own DESTROY.
sub UNIVERSAL::DESTROY {
    my $obj = shift;
    if (exists $CLASS{$obj}) {
        # print "$obj destroyed\n";
        delete $CLASS{$obj};
    }
}

# Dump everything that was created but never destroyed.
sub Leaked {
    print Dumper(\%CLASS);
}

1;
It worked for me. That won't work for classes which have a DESTROY method of their own. (You could just choose to replace those methods on the fly with a wrapper, much like Dominus does with Memoize.) That also won't help much if your memory leak is at the C level. (Which it well could be.)
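A minimal sketch of that wrapping trick, in the spirit of what Memoize does to subroutines. The class Tracked and the %LIVE bookkeeping hash are made up for the demo; the point is that the wrapper does its accounting and then still calls the class's original DESTROY:

```perl
use strict;
use warnings;

{
    package Tracked;              # hypothetical class with its own DESTROY
    our $destroyed = 0;
    sub new     { bless {}, shift }
    sub DESTROY { $destroyed++ }  # the class's own cleanup
}

package main;
our %LIVE;    # stand-in for Leaker's %CLASS bookkeeping

{
    no warnings 'redefine';
    my $orig = \&Tracked::DESTROY;
    *Tracked::DESTROY = sub {
        delete $LIVE{$_[0]};   # our leak bookkeeping first
        $orig->(@_);           # then run the original DESTROY
    };
}

my $obj = Tracked->new;
$LIVE{$obj} = 1;
undef $obj;                    # refcount hits zero, wrapper fires

print scalar(keys %LIVE), "\n";   # 0: bookkeeping ran
print $Tracked::destroyed, "\n";  # 1: original DESTROY still ran
```

Same idea scales to wrapping every class that defines a DESTROY, by walking the symbol table before installing wrappers.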
If all else fails you can just restart yourself with:
exec($0, @ARGV);
But personally I would be strongly inclined to have the process hold a lock while it is running, and then have a periodic cron job that starts the job if it is not currently running. A big advantage of that approach is that if the process shuts down unexpectedly (e.g. a machine reboot), nobody has to remember to start it again. (This is a classic Unix mistake: someone writes a daemon that is key to their operation, launches it, doesn't document it, and then six months later there is a reboot and nobody knows how to get it running again.)
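A minimal sketch of that lock-plus-cron idea using flock. The lock file path is an assumption; run the script from cron every few minutes, and if a previous instance still holds the lock, the new one simply exits:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(LOCK_EX LOCK_NB);

my $lockfile = "/tmp/monitor.lock";   # hypothetical path, adjust to taste

open my $fh, '>', $lockfile or die "Cannot open $lockfile: $!";
unless (flock $fh, LOCK_EX | LOCK_NB) {
    # Another instance is already running; quietly bow out.
    exit 0;
}

# ... do the actual monitoring work here ...
print "got the lock\n";

# The lock is released automatically when the process exits,
# even if it dies or the machine reboots.
```

The non-blocking flock means a crashed run never leaves the system wedged: the kernel drops the lock with the dead process, and the next cron pass starts cleanly.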
I would like to try your code out but am a little confused about how to use it. Below is an oversimplification of the code I am using for debugging. How would I implement your example in this case?
Thanks, Robert
#!/bin/perl
use strict 'vars';
use Utils;

while (1) {
    my $utlp = new Utils();
    $utlp->connect();
    $utlp->Log("database connected.\n");
    $utlp->disconnect();
    $utlp->Log("database disconnected.\n");
    sleep(5);
}
I would suggest starting with Devel::Leak. What I offered is mostly for use once you know that you are leaking Perl objects, in a class without a DESTROY, and you need to figure out where. In which case I would use it like this:
#!/bin/perl
use strict 'vars';
use Leaker;
use Utils;

%Leaker::CLASS = ();
while (1) {
    Leaker::Leaked();
    %Leaker::CLASS = ();
    my $utlp = new Utils();
    $utlp->connect();
    $utlp->Log("database connected.\n");
    $utlp->disconnect();
    $utlp->Log("database disconnected.\n");
    sleep(5);
}
And then I would run it, setting different filters in the logging done by the overridden bless in Leaker, to narrow things down.
It won't work if the leaked things aren't objects, if they are created at an XS level, or if they have their own DESTROY method. (It would be possible to hack around the latter issue with some Perl wizardry, and I pointed you at example source code where you could find the necessary technique.)
For me it was useful because I was leaking a big complicated object which had lots of other objects in it. I didn't care to know details about all of the components, I just wanted to figure out why the base object was not going away. It may or may not be useful for you.
I would recommend trying Devel::Leak first.
Re: any way to control a memory leak
by perrin (Chancellor) on Jan 28, 2002 at 20:19 UTC
You probably don't have a "leak" in the traditional C programming sense. More likely, you have some lexical variables that you're putting large amounts of data into. Lexicals do go out of scope, but Perl holds onto their memory as an optimization in case they are needed again. This means that if you stuff 32K into a lexical, it will stay 32K (actually even bigger) for the life of your program.
One thing you can try is an explicit undef on any lexicals holding large amounts of data. This is supposed to release the memory back to the general pool for Perl (although not back to the OS on most systems). Also, make sure you are using references whenever passing around large chunks of data to subroutines.
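A small sketch of both points: undef a big lexical when you are done with it, and pass large data by reference so the callee never needs its own copy. The helper name and sizes here are made up for illustration:

```perl
use strict;
use warnings;

# Takes a reference, so the megabyte string is never copied into the sub.
sub bytes_logged {
    my ($data_ref) = @_;
    return length $$data_ref;
}

my $big = 'x' x (1 << 20);        # ~1 MB of data held in a lexical
my $n   = bytes_logged(\$big);
print "$n\n";                     # 1048576

undef $big;   # hand the buffer back to Perl's allocator
              # (not necessarily back to the OS)
```

Without the explicit undef, $big would keep its megabyte reserved for the rest of the program, even after it fell out of scope.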
You may be having problems because of your DBD driver. I've seen some versions of DBD::Oracle exhibit bad behavior in terms of memory, mostly due to the constantly changing OCI libraries they depend on. You might want to try stripping out the DBI stuff (or replacing it with dummy calls) and seeing if that makes your memory problem go away. There are configuration parameters you can tweak if this is the problem.
Re: any way to control a memory leak
by joealba (Hermit) on Jan 28, 2002 at 19:44 UTC
Can you give us a little more information?
How exactly are you 'monitoring' the databases?
Are you using strict?
Is your monitoring code running in its own block, with the variables lexically scoped in that block (with my)? That'll make sure that your variables go away when you reach the end of the block.
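A tiny sketch of that block-scoping point: anything declared with my inside a bare block stops existing at the closing brace, so each pass starts clean. The variable names are made up for the demo:

```perl
use strict;
use warnings;

my $passes = 0;
while ($passes < 3) {
    {   # everything declared in this block vanishes at the closing brace
        my $report = "pass " . ++$passes;
        print "$report\n";
    }
    # $report no longer exists here; its refcount dropped at the brace
}
```

Note the caveat from elsewhere in this thread: the memory is returned to Perl's internal pool, not necessarily to the operating system.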
The only thing we are doing to the databases from this script is connecting and then disconnecting.
Yes, I am using strict 'vars'.
Yes, the code is running in its own block and I declare the vars using my. I believe strict for the most part checks for this; please correct me if I am wrong.
Thanks much, Robert
Re: any way to control a memory leak
by talexb (Chancellor) on Jan 28, 2002 at 19:54 UTC
Hmm, odd that this script has to run 24/7; couldn't you get the same effect by running it every ten or fifteen minutes, and having it do a couple of sleep(60) calls inside the code?
As a way of testing out your problem, I would duplicate the script and comment out some of the database code (and run it against a copy of production), just to see where this memory loss is occurring.
--t. alex
"Of course, you realize that this means war." -- Bugs Bunny.
Re: any way to control a memory leak
by Stegalex (Chaplain) on Jan 28, 2002 at 20:04 UTC
Oracle can be pretty buggy. I had a similar situation with my 8i setup.
First, I am assuming that you are doing an $sth->finish on your cursors when you are done with them. If not, please do. Next, do a select * from v$open_cursor to see if you are hanging on to any cursors (the symptom is that it gobbles up CPU and memory). I am still unable to figure out why one of my scripts causes cursors to stay open even after explicitly closing them AND destroying the $dbh when finished.
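A sketch of that cursor hygiene with DBI. The DSN and credentials are placeholders read from the environment, and querying V$OPEN_CURSOR usually requires DBA-ish privileges; the database part only runs if ORACLE_DSN is set:

```perl
use strict;
use warnings;

# List the cursors each session is still holding open, then release
# our own statement handle promptly with finish.
sub check_open_cursors {
    my ($dbh) = @_;
    my $sth = $dbh->prepare(q{SELECT sid, sql_text FROM v$open_cursor});
    $sth->execute;
    while (my ($sid, $sql) = $sth->fetchrow_array) {
        print "session $sid: $sql\n";
    }
    $sth->finish;    # give the cursor back as soon as we are done
}

if (my $dsn = $ENV{ORACLE_DSN}) {    # hypothetical configuration
    require DBI;
    my $dbh = DBI->connect($dsn, $ENV{ORACLE_USER}, $ENV{ORACLE_PASS},
                           { RaiseError => 1 });
    check_open_cursors($dbh);
    $dbh->disconnect;
}
print "done\n";
```

If the open-cursor count climbs while your script runs, something is holding statement handles alive between passes.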
If this is happening on Oracle 8i, you should probably consider downgrading to Oracle 8.0.
I like chicken.
Re: any way to control a memory leak
by virtualsue (Vicar) on Jan 28, 2002 at 22:33 UTC
If you are doing this on Unix, you should consider letting cron handle the daemon portion of this task. This way, all your problems (and a lot of extra code/work) just go away.
Just strip down your script to do one pass at the monitoring, then let it exit. Set up a crontab entry which will run your script at specified intervals. For example, to run your monitoring script every 2 minutes, 7x24:
% crontab -e
*/2 * * * * /path/to/monitor_script.pl >>/tmp/mon_logfile 2>&1
Re: any way to control a memory leak
by jlongino (Parson) on Jan 28, 2002 at 22:00 UTC
First let me say that you should follow tilly's advice to solve the problem. Having said that, you can temporarily write a monitoring process, launched via cron, to kill/restart the process if it reaches a threshold. Check out Re: Re: HTTP Daemonology for details.
Notes:
- The code (mine) is not optimized and can be improved.
- Again, as tilly states, you should document the use of the cron job.
--Jim
Re: any way to control a memory leak
by dws (Chancellor) on Jan 28, 2002 at 23:20 UTC
I have a script that runs 7 days a week 24 hours a day monitoring oracle databases.
In years past Oracle client libraries were notoriously leaky. In Oracle 7 it was possible to leak large chunks of memory simply by repeatedly connecting and disconnecting from the database. I've heard third-hand reports of leaks with Oracle 8 libraries, but have no first-hand knowledge.
Try this: To rule out Oracle as the culprit, set up a script that does nothing but connect and disconnect, and note its memory profile over a few hours of running.
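A sketch of that first test: connect and disconnect in a loop while printing the process's resident size each pass. The DSN and credentials are placeholders, so the Oracle part only runs if ORACLE_DSN is set; the size check reads /proc, which assumes Linux (on other systems, watch the process with ps or top instead):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Resident set size of this process in kB, or -1 if /proc is unavailable.
sub process_kb {
    open my $fh, '<', "/proc/$$/status" or return -1;
    while (<$fh>) {
        return $1 if /^VmRSS:\s+(\d+)\s+kB/;
    }
    return -1;
}

for my $pass (1 .. 3) {
    if ($ENV{ORACLE_DSN}) {    # hypothetical configuration
        require DBI;
        my $dbh = DBI->connect($ENV{ORACLE_DSN}, $ENV{ORACLE_USER},
                               $ENV{ORACLE_PASS}, { RaiseError => 1 });
        $dbh->disconnect;
    }
    printf "pass %d: %d kB\n", $pass, process_kb();
}
```

If the reported size climbs steadily over hours with nothing but connect/disconnect in the loop, the leak is in the client libraries, not your code.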
Next, set up a script that repeatedly issues the same queries, in the same order that your production script does, but without storing any results in Perl data structures (this should rule out your code as the source of the leaks). If this doesn't demonstrate a leak, the culprit is your code. If it does show a leak, try some of the advice above (finish your queries, etc.).
Oh, and verifying that you're running the latest Oracle client libraries is usually a good idea.
Re: any way to control a memory leak
by Biker (Priest) on Jan 28, 2002 at 22:20 UTC
Are you pushing scalars onto an array? Do you also re-use the array without letting it go out of scope in between?
Unless you also pop that array, you should 'empty' it:
$#my_array=-1;
(Been bitten there myself. ;-)
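A tiny sketch of the bite: a reused array keeps accumulating across passes unless you empty it in between. The names are made up for the demo:

```perl
use strict;
use warnings;

my @buf;
for my $pass (1 .. 2) {
    push @buf, 1 .. 5;    # pretend each pass collects some results
}
print scalar(@buf), "\n";   # 10: both passes accumulated

$#buf = -1;                 # empty it (@buf = () does the same)
print scalar(@buf), "\n";   # 0
```

Emptying releases the elements' refcounts, so anything they pointed at can be freed too.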
"Livet är hårt" sa bonden. "Grymt" sa grisen...
("Life is hard," said the farmer. "Grim," said the pig...)