Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:
I prepare a rather huge hash (several GB). Then I start several (8) child processes via fork() that do a lot of reading and a bit of writing on this hash. Because of copy-on-write, Perl effectively copies the hash just by reading it (even a read bumps reference counts, which dirties the shared pages).
Anyway, it's not a real problem while the children are running, but when I want to exit() them, it takes ages and the server even starts swapping.
I guess it is the garbage collection kicking in... but why would it temporarily take so much memory (several times what it needed while running) that the server starts swapping?
So my question is: what is really happening here, and what can I do to avoid the swapping and improve performance? Pseudo code:

    create_huge_hash();
    fork();
    # in each child:
    read_and_write_hash();
    exit();

The problem: when a child reaches exit(), the server waits ages, and during that time it uses more memory than before and sometimes starts swapping.
Thanks in advance for your answers :)
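The pseudo code above can be fleshed out as a runnable sketch. The hash contents, the child count, and the read/write loop are placeholders; one assumption worth noting is the use of POSIX::_exit() in the children instead of exit(), which skips Perl's global destruction so a child never walks (and thereby dirties) the inherited copy-on-write hash on shutdown:

```perl
use strict;
use warnings;
use POSIX ();

# Build a large hash in the parent (kept small here for illustration).
my %huge = map { $_ => "value_$_" } 1 .. 100_000;

my @pids;
for my $n (1 .. 8) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # Child: mostly reading, a bit of writing.
        my $sum = 0;
        $sum += length $huge{$_} for keys %huge;
        $huge{"child_$n"} = $sum;    # write lands in this child's private pages
        # POSIX::_exit() terminates immediately, bypassing global
        # destruction; plain exit() would touch every SV's refcount
        # and force the shared pages to be copied.
        POSIX::_exit(0);
    }
    push @pids, $pid;
}
waitpid $_, 0 for @pids;
```

The trade-off: _exit() also skips END blocks and object destructors in the child, so any cleanup (flushing files, releasing locks) has to be done explicitly before the call.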
Replies are listed 'Best First'.
Re: problems with garbage collection
by BrowserUk (Patriarch) on Jul 13, 2010 at 19:04 UTC
by tye (Sage) on Jul 14, 2010 at 06:43 UTC
by Anonymous Monk on Jul 14, 2010 at 18:16 UTC

Re: problems with garbage collection
by Corion (Patriarch) on Jul 13, 2010 at 18:15 UTC
by Anonymous Monk on Jul 14, 2010 at 16:09 UTC