Given a running init process and a parent process which forks, you're limited to 2**15 - 2 = 32766 forked processes; depending on your system, 2**16 - 2 = 65534 may also be possible :-)
Okay, let's stop kidding.
There is no system or Perl limit on the number of forked processes, but each fork and each process needs CPU time and memory.
Keep your parent as slim as possible before forking if you need many children. This means: load only modules used by all children, don't put big things into variables, etc. Also keep an eye on child/zombie cleanup. Use a wait loop from time to time to catch children for which you missed the CHLD signal.
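A minimal reaping sketch (assuming a POSIX system; the code that actually spawns children is omitted):

use strict;
use warnings;
use POSIX ':sys_wait_h';    # for WNOHANG

# Reap in the signal handler; CHLD signals can coalesce, so loop until
# nothing is left to collect.
$SIG{CHLD} = sub {
    while ((my $pid = waitpid(-1, WNOHANG)) > 0) {
        # child $pid exited with status $?
    }
};

# Safety net: call this from your main loop from time to time to catch
# children whose CHLD signal was missed.
sub reap_stragglers {
    while ((my $pid = waitpid(-1, WNOHANG)) > 0) {
        warn "reaped straggler $pid\n";
    }
}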
I wrote a big forker daemon some time ago. It processed a job queue where most jobs result in HTTP requests to external targets. The parent ran the queue, did some locking checks on the queue items, then forked and used the child to load the item and process it. It was limited to a fixed number of child processes, and to a fixed number of children per HTTP target, to stay within the server's resource limits, but it was still able to process more than 20,000 jobs per hour.
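A rough sketch of that pattern, capping the number of concurrent children; fetch_next_item() and process_item() are hypothetical stand-ins for the real queue and locking code:

use strict;
use warnings;

my $MAX_CHILDREN = 10;
my %running;                        # pid => queue item being processed

while (my $item = fetch_next_item()) {
    # Block until we are below the child limit, reaping as children finish.
    while (keys %running >= $MAX_CHILDREN) {
        my $pid = waitpid(-1, 0);
        delete $running{$pid};
    }

    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {                # child: do the work, then exit
        process_item($item);
        exit 0;
    }
    $running{$pid} = $item;         # parent: track the child
}

# Drain the remaining children before shutting down.
while ((my $pid = waitpid(-1, 0)) > 0) {
    delete $running{$pid};
}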
(There are other ways to do this, I know, but in this situation safety was more critical than good design, and since too many people worked on the different job processors, this was the safest way to keep the daemon running no matter what a child did or didn't do.)
There is no system or Perl limit on the number of forked processes...
ORLY?
$ ulimit -a | grep process
-u: processes 266
See also RLIMIT_NPROC in the setrlimit(2) man page on your favourite POSIX system.
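If you'd rather check that limit from Perl than from the shell, the non-core BSD::Resource module can query it (a sketch, assuming RLIMIT_NPROC is defined on your platform):

use strict;
use warnings;
use BSD::Resource;    # CPAN module, not in core

# Soft and hard per-user process limits, as set via setrlimit/ulimit.
my ($soft, $hard) = getrlimit(RLIMIT_NPROC);
print "soft limit: $soft, hard limit: $hard\n";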
The cake is a lie.
Theoretically, given a Turing machine: unlimited
More practically, it depends on what your OS allows and how it's limited for your account. On Linux, you can see this by running 'ulimit -a' and looking for the line "max user processes".
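If you'd rather measure it empirically than read it from ulimit, a throwaway sketch like this forks until fork() fails. Note that it deliberately exhausts your per-user process quota, so only run it somewhere that's acceptable:

use strict;
use warnings;

my @kids;
while (1) {
    my $pid = fork();
    if (!defined $pid) {            # fork failed (e.g. EAGAIN): limit reached
        warn "fork failed after ", scalar(@kids), " children: $!\n";
        last;
    }
    if ($pid == 0) {                # child: just stay alive for a while
        sleep 60;
        exit 0;
    }
    push @kids, $pid;               # parent: remember the child
}

kill 'TERM', @kids;                 # clean up
waitpid($_, 0) for @kids;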