I prepare a rather huge hash (some GBs), then I fork several (8) child processes that do a lot of reading and a bit of writing on this hash. Because of copy-on-write, Perl basically creates a copy of the hash just from reading it (even a read updates the internal reference counts, which dirties the shared pages).
Anyway, it's not a real problem while the children are running, but when I want them to exit(), it takes ages and the server even starts swapping.
I guess it is the garbage collection kicking in... but why would it temporarily need so much memory (several times what it used while running) that the server starts swapping?
So my question is: what is really happening here, and what can I do to avoid the swapping and improve performance? Pseudo code:
    create_huge_hash();
    for (1 .. 8) {
        my $pid = fork();
        if ($pid == 0) {          # child process
            read_and_write_hash();
            exit();               # <-- here it hangs
        }
    }
    wait() for 1 .. 8;            # parent reaps the children

Problem: when a child reaches exit(), the server waits ages, and during that time it uses more memory than before and sometimes starts swapping.
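One thing I am considering (just a sketch, untested, and it assumes the time really is spent in Perl's global destruction walking and freeing the huge hash, which also dirties the copy-on-write pages): letting each child bail out with POSIX::_exit() instead of exit(). That terminates the process without running END blocks or global destruction, so the shared pages are never touched on the way out and the kernel reclaims the memory in one go.

    use POSIX ();

    # ... in the child, after the work is done:
    read_and_write_hash();
    POSIX::_exit(0);    # hard exit: no END blocks, no destructors,
                        # no per-SV freeing of the multi-GB hash

One caveat: POSIX::_exit() also skips flushing buffered output, so any open filehandles in the child should be flushed or closed explicitly first.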
Thanks in advance for your answers :)