cLive ;-) has asked for the wisdom of the Perl Monks concerning the following question:
Hi All,
I'm now working on a daemon and am trying to improve garbage collection. Until now, I was under the impression that collection was completely automatic, but this quick test says otherwise (on Linux):
#!/usr/bin/perl
use strict;
use warnings;

show_size();
{
    my @var = (0 .. 1000000);
    show_size();
}
show_size();
exit(0);

sub show_size {
    local $/;
    open(my $pfh, '<', "/proc/$$/status") || die $!;
    my $size = <$pfh> =~ /VmSize:\s+(\d+)/ ? $1 : 'unknown';
    close($pfh);
    print "Process size: $size\n";
}

# Output is
# Process size: 49956
# Process size: 53864
# Process size: 53864
But, if I add in some explicit undefs, the result changes:
#!/usr/bin/perl
use strict;
use warnings;

show_size();
{
    my @var = (0 .. 1000000);
    show_size();
    undef @var;
}
show_size();
exit(0);

sub show_size {
    local $/;
    open(my $pfh, '<', "/proc/$$/status") || die $!;
    my $size = <$pfh> =~ /VmSize:\s+(\d+)/ ? $1 : 'unknown';
    close($pfh);
    print "Process size: $size\n";
    undef $pfh;
    undef $size;
}

# Output is
# Process size: 49660
# Process size: 53568
# Process size: 49660
I was under the impression that when a variable goes out of scope it gets cleaned up by Perl's automatic garbage collection. If that's the case, why doesn't the process size go down after @var falls out of scope without the explicit undef? Am I misunderstanding how garbage collection works? A colleague says that Perl frees the memory for re-use within the process but won't return it to the operating system. Is that how it really works?
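A minimal sketch of how that claim could be checked (re-using the same show_size helper as above): allocate the array in two separate scopes. If Perl keeps the freed memory in its own pool for re-use, the second allocation should not grow the process any further.

#!/usr/bin/perl
use strict;
use warnings;

show_size();                    # baseline
{
    my @var = (0 .. 1000000);   # first big allocation
    show_size();
}
{
    my @var = (0 .. 1000000);   # second allocation should re-use the freed pool
    show_size();
}
exit(0);

sub show_size {
    local $/;
    open(my $pfh, '<', "/proc/$$/status") || die $!;
    my $size = <$pfh> =~ /VmSize:\s+(\d+)/ ? $1 : 'unknown';
    close($pfh);
    print "Process size: $size\n";
}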
Either way, what is the best practice for controlling memory use in a daemon setup? Any insights appreciated.
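One idea I've been toying with (just a rough sketch with a hypothetical heavy_work_in_child helper, not sure it's the idiomatic answer) is to fork the memory-heavy work into a child process, so the pages go back to the OS when the child exits and the long-running parent stays small:

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical helper: run the memory-hungry step in a child process so
# its memory is returned to the OS when the child exits.
sub heavy_work_in_child {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # child: the big allocation lives and dies here
        my @big = (0 .. 1000000);
        # ... process @big, hand results back via a file, pipe or DB ...
        exit(0);
    }
    waitpid($pid, 0);   # parent waits, then carries on lean
    return;
}

heavy_work_in_child();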
cLive ;-)