This is my understanding, which may be out of date. I'm always up for a refresher course in modern operating system design.
From the application's (and its malloc()'s) point of view, the memory space is a contiguous resource: once available, it is always available. From the kernel's point of view, RAM pages that have not been accessed lately may be written out to some persistent store (swap) and the physical pages repurposed to back other processes' virtual addresses. However, malloc() may later revisit a once-free()'d range, and then the kernel must arrange physical RAM to back it again.

A process which frees a chunk of memory in the middle of its address range does not shrink in the process-list accounting; instead, the kernel over-books the physical RAM across all the processes that want memory, and thrashes when the over-booking grows significant enough that processes are contending for more RAM than is physically available.
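To see that from the Perl side (in the spirit of the hash-deallocation question), here's a rough sketch. It assumes a Linux box with a readable /proc/self/status; the exact numbers will vary, and the point is only the shape of the result, not the values.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Report the kernel's view of this process's virtual size, in kB.
    sub vsize_kb {
        open my $fh, '<', '/proc/self/status' or return 'n/a';
        while (<$fh>) {
            return $1 if /^VmSize:\s+(\d+)\s+kB/;
        }
        return 'n/a';
    }

    print "before build: ", vsize_kb(), " kB\n";

    # Build a hash big enough to force the process to grow.
    my %big;
    $big{$_} = 'x' x 100 for 1 .. 500_000;
    print "after build:  ", vsize_kb(), " kB\n";

    # The Perl-level data is gone after this...
    undef %big;
    # ...but the process's address space usually is not returned to the kernel.
    print "after undef:  ", vsize_kb(), " kB\n";

On most systems you'll see VmSize jump when the hash is built and then stay roughly where it is after the undef: Perl (and the C library allocator beneath it) keeps the freed space around for reuse rather than handing it back to the kernel.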
--
[ e d @ h a l l e y . c c ]