What could be causing the difference in memory utilisation between these two servers?
You don't say what your wanted subroutine is doing, or whether the two servers hold similar sets of the files you are looking for.
If you are looking for files and pushing them onto an array, and on one server you have lots of matches while on the other you have very few, then you have a good explanation for why the process on one server is so much larger.
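To illustrate, here is a minimal sketch of that pattern: every match is pushed onto an array, so the array (and hence the process) grows with the number of matches. The sandbox directory, file names, and the `.log` criterion are all assumptions for the example, not from the original thread.

```perl
use strict;
use warnings;
use File::Find;
use File::Temp qw(tempdir);

# Hypothetical sandbox so the sketch is runnable anywhere.
my $dir = tempdir(CLEANUP => 1);
for my $name (qw(a.log b.log c.txt)) {
    open my $mk, '>', "$dir/$name" or die "Cannot create $dir/$name: $!";
    close $mk;
}

# The pattern described above: every match is pushed onto @found.
# With millions of matches, @found alone can make the process huge.
my @found;
find(sub { push @found, $File::Find::name if -f && /\.log$/ }, $dir);

print scalar(@found), " matches\n";   # prints "2 matches" in this sandbox
```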
If this is indeed the case, then you need to adopt a different strategy. One thing I have done in the past is to write the names of the wanted files to a workfile. Then, once the find operation has completed, you can go back through that file and read the results. The more files you match, the more memory you save, since you only ever have to keep one record in memory at a time.
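The workfile strategy might look like this. It is a sketch under the same assumptions as before (hypothetical sandbox, `.log` criterion): matches are streamed to a temporary file during the find, then read back one line at a time afterwards.

```perl
use strict;
use warnings;
use File::Find;
use File::Temp qw(tempdir tempfile);

# Hypothetical sandbox so the sketch is runnable anywhere.
my $dir = tempdir(CLEANUP => 1);
for my $name (qw(a.log b.log c.txt)) {
    open my $mk, '>', "$dir/$name" or die "Cannot create $dir/$name: $!";
    close $mk;
}

# Stream matches into a workfile instead of an in-memory array.
my ($fh, $workfile) = tempfile(UNLINK => 1);
find(sub {
    print {$fh} "$File::Find::name\n" if -f && /\.log$/;
}, $dir);
close $fh or die "Cannot close workfile: $!";

# After the find completes, read the results back one record at a
# time -- only a single line is ever held in memory.
open my $in, '<', $workfile or die "Cannot open $workfile: $!";
my $count = 0;
while (my $path = <$in>) {
    chomp $path;
    $count++;   # process $path here
}
close $in;

print "$count matches\n";   # prints "2 matches" in this sandbox
```

The trade-off is an extra pass over the data and some disk I/O, but memory use stays flat no matter how many files match.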
- another intruder with the mooring in the heart of the Perl
In reply to Re: File::Find hogging memory
by grinder
in thread File::Find hogging memory
by Anonymous Monk