in reply to Re^2: child process dies soon after fork() call
in thread child process dies soon after fork() call

I am not aware of any such limit. That isn't to say one doesn't exist; I am neither a sysadmin nor an expert on Linux internals. (However, having googled SHMMAX: that controls the maximum size of a System V shared memory segment, so it should be entirely unrelated unless you are deliberately using shared memory.)

However, one question that comes up is whether the children are each loading their own copy of the same Bayesian classifier model. If so, you can save RAM by forking one child, having it load the model, and then having it fork itself into 4 workers. Thanks to copy-on-write, the 4 children will then share a lot more memory. As they continue to work, some of that memory will become unshared, but it may still be a big saving overall.
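A minimal sketch of that fork-after-load pattern. Note that load_classifier() and do_work() here are hypothetical stand-ins for whatever your actual code does; the point is only the ordering (load once, then fork):

```perl
use strict;
use warnings;

# Hypothetical stand-ins for the real code.
sub load_classifier { return { model => 'large Bayesian model' } }
sub do_work         { my ($model) = @_; }   # real classification goes here

# Fork $n workers AFTER the big model is loaded; until a child writes to
# those pages, they are shared copy-on-write with the parent and siblings.
sub spawn_workers {
    my ($classifier, $n) = @_;
    my @pids;
    for (1 .. $n) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {              # child
            do_work($classifier);
            exit 0;
        }
        push @pids, $pid;             # parent
    }
    waitpid($_, 0) for @pids;
    return scalar @pids;
}

my $classifier = load_classifier();   # load the model ONCE, before forking
print spawn_workers($classifier, 4), " workers finished\n";
```

The inverse order (fork 4 children, each loads the model itself) gives you 4 private copies and roughly 4x the RAM use.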

Now, why are you running out of memory? I don't know. In theory you have 16 GB of RAM available to you. However, it is possible that other things are using most of it, or that a sysadmin has set a ulimit on how much memory the user you're running as can use. Either way, the behavior you describe is consistent with running out of memory at close to 2 GB.
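You can check the ulimit theory directly from the shell the process runs under (the limits are per-process and inherited, so check them as the same user that runs the script):

```shell
# Per-process resource limits for the current shell/user.
ulimit -a          # all limits at once
ulimit -v          # max virtual memory, in KB ("unlimited" or a number)
ulimit -d          # max data segment size, in KB

# How much RAM the box as a whole actually has free right now.
free -m
```

A `ulimit -v` or `ulimit -d` in the neighborhood of 2097152 (2 GB in KB) would explain exactly what you're seeing.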

But that is testable. Just create several deliberately large processes and see at what size they run out of memory.
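Something along these lines would do it. This is a sketch I made up (the helper name can_allocate_mb is not from any library): each probe allocates one big string in a forked child, so that a killed or aborted allocation doesn't take down the probing script itself.

```perl
use strict;
use warnings;

# Try one big allocation of $mb megabytes in a forked child; report
# whether the child survived it.
sub can_allocate_mb {
    my ($mb) = @_;
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        my $buf = "\0" x ($mb * 1024 * 1024);   # one big allocation
        exit 0;                                  # success
    }
    waitpid($pid, 0);
    return $? == 0;    # non-zero status: child died or was killed
}

# Walk upward in 256 MB steps; the last size that succeeds approximates
# the per-process limit. Raise the cap toward 16 GB on a box you own.
for (my $mb = 256; $mb <= 1024; $mb += 256) {
    my $ok = can_allocate_mb($mb);
    printf "%4d MB: %s\n", $mb, $ok ? "ok" : "FAILED";
    last unless $ok;
}
```

If the failures start right around 2 GB regardless of what else the machine is doing, that points at a per-process limit rather than the box being out of RAM.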
