in reply to Re: child process dies soon after fork() call
in thread child process dies soon after fork() call

No, RAM is not the issue. The server runs Ubuntu 8.04, has 8GB of RAM, and the kernel is PAE enabled so it can see all 8GB. Swap is also not an issue - 8GB of swap. Unfortunately I can't make the children smaller: they all load an instance of a Bayesian classifier model trained on a large data set. The only real solution would be to write a server that loads that classifier, launch several of those servers listening on different ports, and then have the spawned children I mentioned earlier talk to those servers over sockets in a round-robin fashion. It's basically a way of offloading some of the data processing to separate instances instead of doing it inside the child processes that sometimes crash. So to reiterate, you are not aware of any restrictions on parent-child memory allocation? Nothing related to values of SHMMAX or stuff like that? The current value of SHMMAX is 32MB, btw.
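
For what it's worth, the round-robin part of that plan could stay pretty small. A rough sketch of the client side (the host, ports, and one-line-per-request protocol here are made up for illustration, not anything from the real setup):

    use strict;
    use warnings;
    use IO::Socket::INET;

    # Hypothetical ports the classifier servers would listen on.
    my @ports = (7001, 7002, 7003, 7004);
    my $next  = 0;

    # Send one document to the next server in line and read back its label.
    sub classify_remote {
        my ($text) = @_;
        my $port = $ports[ $next++ % @ports ];      # round-robin selection
        my $sock = IO::Socket::INET->new(
            PeerAddr => 'localhost',
            PeerPort => $port,
            Proto    => 'tcp',
        ) or die "can't connect to classifier on port $port: $!";
        print $sock "$text\n";                      # assumes one line in...
        chomp( my $label = <$sock> );               # ...one line back
        close $sock;
        return $label;
    }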

Re^3: child process dies soon after fork() call
by tilly (Archbishop) on Jan 22, 2009 at 05:47 UTC
    I am not aware of anything like that. That isn't to say there isn't a limit that I'm simply not aware of, though. I am neither a sysadmin nor an expert on Linux internals. (However, from googling SHMMAX, it appears to be entirely unrelated unless you are deliberately using shared memory.)

    However, one question that comes up is whether all of the children are loading the same instance of the Bayesian classifier model. If so, then you can save on RAM by forking one child, having that one load the Bayesian classifier model, and then having it fork itself into 4. That will result in the 4 children sharing a lot more memory. As they continue to work, some of that memory will become unshared, but it may save you overall.
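
    Roughly, the idea looks like this (an untested sketch; load_model() and do_work() are stand-ins for whatever your real code does):

        use strict;
        use warnings;

        my $pid = fork();
        die "fork failed: $!" unless defined $pid;

        if ($pid == 0) {
            # First child: load the big model exactly once...
            my $model = load_model();    # stand-in for loading the Bayesian classifier

            # ...then fork the 4 workers.  They start out sharing the model's
            # pages copy-on-write, so the RAM cost is paid once (until a worker
            # writes to a page and unshares it).
            for my $n (1 .. 4) {
                my $worker = fork();
                die "fork failed: $!" unless defined $worker;
                if ($worker == 0) {
                    do_work($model);     # stand-in for the real per-child processing
                    exit 0;
                }
            }
            1 while waitpid(-1, 0) > 0;  # reap the 4 workers
            exit 0;
        }

        waitpid($pid, 0);                # parent waits for the loader child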

    Now why are you running out of memory? I don't know. In theory you have 16 GB of memory (RAM plus swap) available to you. However it is possible that other things are using most of it, or that some sysadmin has set a ulimit on how much memory the user you're running as can use. Whatever the case, the behavior you describe is consistent with your running out of memory at close to 2 GB.

    But that is testable. You just need to create several deliberately large processes and see where they run out of RAM.
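
    For example, something along these lines (a crude sketch) grabs memory in 100 MB chunks until the allocation fails or the process is killed, which tells you roughly where the ceiling is:

        use strict;
        use warnings;

        $| = 1;                          # unbuffered, so the output survives the crash
        my @hog;
        my $mb = 0;
        while (1) {
            push @hog, 'x' x (100 * 1024 * 1024);   # grab another ~100 MB and keep it live
            $mb += 100;
            print "allocated ${mb} MB so far\n";
        }

    One copy of that shows the per-process ceiling (interesting if it dies near 2 GB); several copies at once show where the box as a whole runs out.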

Re^3: child process dies soon after fork() call
by roboticus (Chancellor) on Jan 22, 2009 at 09:52 UTC
    haidut:

    <wild_guess_mode> Perhaps you're running a 32-bit version of perl? That might constrain it to a 2GB address space. </wild_guess_mode>
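
    One quick way to check (assuming a stock perl on that box) is to ask the interpreter for the pointer size it was built with:

        perl -V:ptrsize

    A 32-bit perl reports ptrsize='4'; a 64-bit build reports ptrsize='8'.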

    ...roboticus