I read it through, and it appears that separating the memory-consuming part into another script and then exec'ing it is one solution, but I don't see the memory being released when I do that.
By the way, I am using Parallel::ForkManager in file1.pl.
file1.pl
use strict;
use warnings;
use Parallel::ForkManager;

my $pm = Parallel::ForkManager->new(4);       # run at most 4 children at a time
foreach my $base_dir (@base_dirs) {           # @base_dirs populated earlier
    my $storable_file = $storable_dir . $f;   # $storable_dir and $f set elsewhere
    $pm->start and next;                      # fork; the parent moves on to the next dir
    system('/home/qiang/file2.pl', $storable_file, $base_dir) == 0
        or warn "file2.pl failed for $base_dir: $?";
    $pm->finish;
}
$pm->wait_all_children;
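Since the complaint is that the memory never seems to be released, it may help to measure it per process instead of eyeballing top. Below is a minimal, Linux-specific sketch (it reads VmRSS from /proc; rss_kb is a hypothetical helper I added for illustration, not part of the original scripts) that could be dropped into file1.pl to print the parent's resident set before and after the children run:

# Hypothetical helper: resident set size of a process, in kB (Linux only).
sub rss_kb {
    my $pid = shift // $$;                       # default to the current process
    open my $fh, '<', "/proc/$pid/status" or return;
    while (<$fh>) {
        return $1 if /^VmRSS:\s+(\d+)\s+kB/;     # "VmRSS:   123456 kB"
    }
    return;
}

printf "parent RSS before children: %d kB\n", rss_kb();
# ... the fork loop above runs here ...
printf "parent RSS after children:  %d kB\n", rss_kb();

If the parent's RSS stays flat while the file2.pl children grow and then exit, the big allocations are in fact being returned to the OS with each child.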
file2.pl
use strict;
use warnings;
use File::Find;
use Storable qw(store);

my ($storable_file, $base_dir) = @ARGV;

# Per-user directories under $base_dir (replaces the shell-out to /bin/ls).
my @usr_dirs = grep { -d } glob("$base_dir/*");

my $mbox = {};
find(\&wanted, @usr_dirs);   # wanted() fills %$mbox
store $mbox, $storable_file or die "Can't store $storable_file!\n";
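The wanted() callback is referenced above but not shown in the post; as one hypothetical sketch (the real one presumably does more), it could count regular files per directory into $mbox:

# Hypothetical wanted(): tally regular files per directory.
# The actual callback is not shown in the original post.
sub wanted {
    return unless -f $File::Find::name;
    $mbox->{$File::Find::dir}++;
}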