I am concerned about the overhead of fork() because of the amount of data I intend to have cached in shared memory. When a child is reaped, Perl's garbage collection tries to free memory that is really shared, copy-on-write memory; the Linux kernel then has to copy those shared pages just so Perl can clear them. So the concern is less with fork() itself than with the reaping of the children.
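(As an aside, one common way around that exit-time cost, not something stated above, is to have the child bypass Perl's global destruction entirely with POSIX::_exit, so the garbage collector never touches the copy-on-write pages. A minimal untested sketch, with filler standing in for the real cache:

use strict;
use warnings;
use POSIX ();

my %big_cache = map { ($_ => 'x' x 100) } 1 .. 100_000;   # filler for the shared data

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # ... child works against %big_cache read-only ...
    # POSIX::_exit skips END blocks and global destruction, so the
    # refcounts in the shared pages are never written and never copied.
    POSIX::_exit(0);
}
waitpid($pid, 0);   # reaping is now cheap; no copy-on-write storm

This only works if the child has no cleanup that actually needs to run, of course.)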
Maybe I didn't understand your point well, but if you are sharing a big amount of data between children, why not keep a "master" process that feeds the children chunks of data (e.g. over pipes) when they need it, instead of keeping a copy of the cached data in each child's memory?
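Something like this untested sketch; the chunk contents, the two-chunks-per-worker split, and the worker count are all made up for the demo:

use strict;
use warnings;

my @chunks  = map { "chunk-$_\n" } 1 .. 4;   # hypothetical data to hand out
my $workers = 2;

for my $w (1 .. $workers) {
    pipe(my $reader, my $writer) or die "pipe: $!";
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;

    if ($pid == 0) {            # child: read chunks until the master closes the pipe
        close $writer;
        while (my $chunk = <$reader>) {
            chomp $chunk;
            print "worker $w got $chunk\n";
        }
        exit 0;
    }

    close $reader;              # master: feed this child its share
    print {$writer} $_ for splice(@chunks, 0, 2);
    close $writer;              # EOF tells the child it is done
}
waitpid(-1, 0) for 1 .. $workers;   # reap the children

In a real version the master would keep the pipes open and dole out chunks on demand, but the shape is the same.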
Also, do you think that changing some data in one process automagically shows up in the other processes? Because that doesn't happen with fork: after the fork, each process has its own (copy-on-write) copy.
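You can see that in a couple of lines:

use strict;
use warnings;

my $counter = 0;
my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    $counter = 42;      # modifies only the child's copy of the variable
    exit 0;
}
waitpid($pid, 0);
print "parent still sees counter = $counter\n";   # prints 0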
If I were in your shoes, I'd benchmark a small, memory-eating version of the problem four ways: using threads (see perlthrtut), using my fork implementation, using an implementation based on select, and using a single process (maybe with some forked helpers) built on POE.
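Even a crude harness will tell you a lot; in this sketch the per-approach script names are hypothetical placeholders for your four implementations:

use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

for my $candidate (qw(threads fork select poe)) {
    my $t0 = [gettimeofday];
    system($^X, "$candidate-version.pl") == 0
        or warn "$candidate-version.pl failed: $?";
    printf "%-8s %.3fs\n", $candidate, tv_interval($t0);
}

Run each candidate under something like /usr/bin/time -v as well, since peak memory matters here at least as much as wall-clock time.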
In reply to Re: Reliable asynchronous processing by Ultra
in thread Reliable asynchronous processing by Codon