I'm thinking of using a Perl program as a daemon on Linux. It'll periodically use fork() to create separate processes, and these child processes will all die off within the next half hour or so, after they've done their job.
I will be telling the OS that I do NOT want to wait on the child processes when they die (by having SIGCHLD ignored rather than calling wait()), so I should have no zombies in the process table.
From what I've read, it appears that when a child process I created with fork() dies, it releases all the memory it used. I see that on something like Windows, fork() is emulated with a pseudo-process that can keep some memory tied up, but I don't see anything stating that for Linux.
Could someone please confirm this for me? In other words, if I have a Perl daemon on Linux that fork()s child processes which later die, will this continual forking create a memory leak, or am I okay with a number of child processes that tend to die off?
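In case it helps, here's a minimal sketch of the pattern I mean (the sleep and the print are just placeholders for the real work):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Tell the kernel we will never wait() on our children; exited
# children are then reaped automatically and leave no zombies.
$SIG{CHLD} = 'IGNORE';

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # Child: do the work, then exit. On Linux, all of this
    # process's memory is returned to the kernel when it exits.
    sleep 1;
    exit 0;
}

# Parent: carry on without waiting for the child.
print "forked child $pid\n";
```

The daemon would run something like this in a loop, so over time many children are created and exit.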
In reply to Memory Usage with fork() on Linux by HalNineThousand