The problem is that when the file is very large (relative, I know, but ~200-800MB is what I'm talking about), at some point the "system" calls (used to do "mv" and "gzip"...) fail with -1 and/or memory is sucked out of the box (HP-UX 11). Each split-out file is compressed and emailed (previously with Net::SMTP::Multipart, though in the new re-engineered process I chose MIME::Lite for the job...), then removed. At the end of the process, the original fileset is also moved and compressed.
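For context, the compress-and-mail step with MIME::Lite looks roughly like this (the path, addresses, and relay host below are placeholders, not my real config):

```perl
use MIME::Lite;
use File::Basename qw(basename);

my $gz_file = '/var/tmp/split.0001.gz';    # hypothetical split-out file, already gzipped

# Build one message per split-out file and attach the compressed part
my $msg = MIME::Lite->new(
    From    => 'batchjob@myhost',
    To      => 'someone@otherhost',
    Subject => 'Split file ' . basename($gz_file),
    Type    => 'multipart/mixed',
);
$msg->attach(
    Type        => 'application/x-gzip',
    Path        => $gz_file,
    Filename    => basename($gz_file),
    Disposition => 'attachment',
);

# Relay host is on a different box now, so send via SMTP rather than local sendmail
$msg->send('smtp', 'mailrelay.mydomain', Timeout => 60);
```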
Since the old process did not "use strict; use warnings" and was not properly built for failover & recovery (critical since the mail relay server is now on an entirely different host...), I've rewritten it as 2 separate scripts and a library routine. It handled the volume and stress like a charm until I plugged the system calls back in (as opposed to a ksh wrapper calling the Perl script for each split-out directory entry). That change brought my machine to its knees; it recovered, but for a stretch of about 10 minutes memory was exhausted.
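The shell-outs I plugged back in are basically this shape (file names are illustrative); the status checks are only there to show where the -1 comes back:

```perl
my @split_files = glob('/var/tmp/split.*');    # hypothetical split-out directory entries

foreach my $part (@split_files) {
    my $rc = system("gzip $part");             # one child process forked per file
    if ($rc == -1) {
        # -1 means the fork/exec itself failed; $! says why
        warn "could not launch gzip for $part: $!\n";
    }
    elsif ($rc != 0) {
        warn "gzip exited with status ", $rc >> 8, " for $part\n";
    }
}
```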
So, a question for those of you with greater Perl powers and blessings: how can this "system"-call memory leak be abated, or even diagnosed? I've already changed calls like 'system "rm ..."' to unlink, but would it also perform better if I used a library like Compress::Zlib instead of 'system "gzip ..."' calls?
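What I have in mind is something along these lines, doing the gzip/rm/mv work in-process instead of shelling out (paths below are hypothetical, this is just a sketch of the idea):

```perl
use strict;
use warnings;
use Compress::Zlib;
use File::Copy qw(move);

# Compress a file in-process, then unlink the original (instead of system "gzip"/"rm")
sub gzip_file {
    my ($in) = @_;
    my $out = "$in.gz";

    open my $fh, '<', $in or die "can't read $in: $!";
    binmode $fh;

    my $gz = gzopen($out, 'wb') or die "can't create $out: $gzerrno";
    while (my $len = read($fh, my $buf, 64 * 1024)) {
        $gz->gzwrite($buf) == $len
            or die "write to $out failed: " . $gz->gzerror;
    }
    $gz->gzclose;
    close $fh;

    unlink $in or die "can't remove $in: $!";    # instead of system "rm ..."
    return $out;
}

my $gzipped = gzip_file('/var/tmp/split.0001');                 # hypothetical file
move($gzipped, '/var/tmp/outbound/') or die "move failed: $!";  # instead of system "mv ..."
```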