Yeah, that could be one point where it's failing. I have noticed that a few times as well. But sometimes all forks succeed (I see 7 processes in top, parent + 6 children, which confirms that all 6 forks were successful) and then the error crops up later.
Now let me give you what I found after doing what you guys suggested. This time I ran the program 4 times, and all 4 times fork() failed with ENOMEM. But as I said, the system status shows there is enough RAM + swap available:
Total memory (RAM): 8G, 1.6G free
Swap: 10G, 2.5G free when the error occurs.
The parent process takes about 1.6G of memory, and each child process also takes up the same 1.6G. fork() failed after 4 child processes were spawned. So logically there is no reason to get ENOMEM from fork() - right? Your thoughts please.
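For the next run I can wrap the fork() call so it records the exact error string and the kernel's memory-commit counters at the moment of failure. This is only a sketch, assuming Linux; the /proc field names are what my kernel exposes and may differ elsewhere.

    use strict;
    use warnings;

    # Hypothetical wrapper around the fork() call in my program: it records
    # errno and the kernel's memory-accounting counters at the moment
    # fork() fails, instead of just dying.
    sub try_fork {
        my $pid = fork();
        return $pid if defined $pid;        # parent gets the child pid, child gets 0

        my $errno = 0 + $!;                 # numeric errno (ENOMEM is what I see)
        my $err   = "$!";                   # e.g. "Cannot allocate memory"
        warn "fork failed: $err (errno $errno)\n";

        # Commit accounting can refuse a fork even when 'free' shows space.
        # These /proc files are Linux-specific; the field names are assumed
        # from my kernel and may differ on other versions.
        if (open my $mi, '<', '/proc/meminfo') {
            while (<$mi>) {
                print STDERR $_ if /^(CommitLimit|Committed_AS|MemFree|SwapFree):/;
            }
            close $mi;
        }
        if (open my $oc, '<', '/proc/sys/vm/overcommit_memory') {
            chomp(my $mode = <$oc>);
            print STDERR "vm.overcommit_memory = $mode\n";
            close $oc;
        }
        return undef;
    }

That should at least show whether the kernel is refusing to commit the copy-on-write image for a 1.6G parent, or whether something else is returning ENOMEM.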
Here's what I have found: if I keep allocating memory from within a process, e.g.
perl -le 'while (1) { $a .= "x" x (1<<26) }'
the process terminates after its allocated memory hits 4G (sometimes with a malloc failure - "Out of memory", sometimes with a segmentation fault).
So my question is: are there any process-level limits (I guess my perl is 32-bit) on how much memory can be allocated? Also, can you guys point out other things I need to check (ulimit, swap space, RAM, number of processes per user, number of open file descriptors, etc.) to track down this error? Please also leave pointers on how to check them.
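Here is a slightly more verbose variant of that one-liner, printing a running total so the last line shows roughly where the process dies (a sketch; the 64 MB chunk size is arbitrary):

    use strict;
    use warnings;

    # Grow one scalar in 64 MB chunks and print a running total, so the
    # last line printed shows roughly how much was allocated before the
    # process dies (whether with "Out of memory" or a segfault).
    $| = 1;                              # autoflush, so the last total isn't lost
    my $chunk = 1 << 26;                 # 64 MB per step; the size is arbitrary
    my $buf   = '';
    my $total = 0;

    while (1) {
        $buf   .= 'x' x $chunk;
        $total += $chunk;
        printf "allocated %d MB\n", $total / (1 << 20);
    }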
Appreciate your help.
> perl -V|grep malloc
It should show "usemymalloc=n" if your perl supports more than 4 GB. If it doesn't, you need to recompile perl or get another binary. I suggest using the most recent stable perl for this (5.8.6 at the moment).
I'm wondering if you can't get the algorithm to work without taking up more than 4 GB, though. :-)
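For the other things you asked about (ulimit, processes per user, open file descriptors), something like the following dumps the relevant perl build details and the per-process limits in one go. This is only a sketch: BSD::Resource is a CPAN module rather than core, and not every RLIMIT_* constant exists on every platform, so adjust the list to what your system defines.

    use strict;
    use warnings;
    use Config;
    use BSD::Resource qw(getrlimit RLIM_INFINITY
                         RLIMIT_DATA RLIMIT_AS RLIMIT_RSS
                         RLIMIT_NPROC RLIMIT_NOFILE);

    # How this perl was built: perl's own malloc or the system one, and the
    # pointer size (4 bytes = 32-bit address space, 8 bytes = 64-bit).
    print "usemymalloc = $Config{usemymalloc}\n";
    print "ptrsize     = $Config{ptrsize}\n";

    # Per-process resource limits as the kernel reports them for this process.
    my %limit = (
        RLIMIT_DATA   => RLIMIT_DATA,      # data segment size
        RLIMIT_AS     => RLIMIT_AS,        # total address space
        RLIMIT_RSS    => RLIMIT_RSS,       # resident set size
        RLIMIT_NPROC  => RLIMIT_NPROC,     # processes per user
        RLIMIT_NOFILE => RLIMIT_NOFILE,    # open file descriptors
    );
    for my $name (sort keys %limit) {
        my ($soft, $hard) = getrlimit($limit{$name});
        printf "%-13s soft=%s hard=%s\n", $name,
            map { $_ == RLIM_INFINITY ? 'unlimited' : $_ } $soft, $hard;
    }

ulimit -a from the shell shows similar numbers, but since limits are inherited per process it is safer to query them from inside the perl that actually does the fork().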
It's my bad, I didn't make it clear I guess... this 4G limit was reached using a test program, NOT the actual program. The actual program uses 1.6G or so per process. Please see my previous post for details.