in reply to Re^4: System call doesn't work when there is a large amount of data in a hash
in thread System call doesn't work when there is a large amount of data in a hash
> It is weird that there is no easy solution for this, is this also with python or other languages?

Yes, any huge process trying to call fork() on this system will have the same problem. Well, I don't know that for sure, but we can check. Try to run this C program on the CentOS system:
/*
 * Pass this program the number of gigabytes it should
 * allocate before forking as the only command line argument
 * (fractions are accepted). For example: ./a.out 250
 * It will print any errors it encounters.
 */
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    void *ptr;

    if (argc != 2)
        return -1;

    /* Grab the requested number of gigabytes. */
    ptr = malloc(1024 * 1024 * 1024 * atof(argv[1]));
    if (!ptr) {
        perror("malloc");
        return -2;
    }

    /* Now try to fork while holding that much memory. */
    if (fork() < 0)
        perror("fork");

    free(ptr);
    return 0;
}
(To compile and run the program, try gcc program.c -o program && ./program 250 or even make program && ./program 250.)
If that fork() fails too, the problem isn't specific to Perl: the kernel has to be prepared to commit memory for a complete copy of the parent's address space, and with conservative overcommit settings it can refuse once the process holds more than half of RAM. Unless the language uses vfork() or posix_spawn() in its system() implementation, the call will fail. Perhaps the administrators of the CentOS machine could shed some light on the problem? They might know better than I do what to do if you tell them that your processes have trouble forking when they allocate more than 50% of RAM.
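For what it's worth, here is a minimal sketch (not from the original post; it assumes Linux/glibc, and the echo command is just a placeholder) of how a large process could launch a command with posix_spawn() instead of fork()+exec():

#include <spawn.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>

extern char **environ;

int main(void)
{
    pid_t pid;
    int err;
    /* Placeholder command; substitute whatever you need to run. */
    char *child_argv[] = { "echo", "hello from the spawned child", NULL };

    /* posix_spawn() starts the child without making the kernel
     * commit memory for a full copy of the parent's address space,
     * which is what trips up a plain fork() in a huge process. */
    err = posix_spawn(&pid, "/bin/echo", NULL, NULL, child_argv, environ);
    if (err != 0) {
        fprintf(stderr, "posix_spawn: %s\n", strerror(err));
        return 1;
    }

    if (waitpid(pid, NULL, 0) < 0)
        perror("waitpid");
    return 0;
}

Modern glibc implements posix_spawn() on top of clone() with CLONE_VM|CLONE_VFORK, so it can succeed even when the parent is too large for an ordinary fork() to go through.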