Just because a buffer has been freed doesn't mean it gets released to the system. Consider this case:
my $buf;
for (0..1000) {
    $buf = 'A' x (1024*1024*100);
    print "Run $_: Allocated\n";
    undef $buf;
    print "Run $_: Done\n";
}
If things worked the way you want, that loop would keep fetching memory from the system and releasing it back, which would be slow. Instead, your memory manager keeps some memory in reserve so that it doesn't do so badly in cases like this (at the cost of not doing quite so well in the best cases).
Memory management in general, and Perl's especially, is complicated and not necessarily intuitive (but it works very well).
{
    my $buf = "A" x (1024 * 1024 * 100);
    print "Allocated " . length($buf) . " byte buffer\n";
}
print "Finished\n";
sleep(1000);
This makes $buf go out of scope, but even then it still won't do what you expect.
Actually, the undef is more likely to return memory. When a variable goes out of scope, Perl keeps the memory assigned to it in case the variable is used again; calling undef on it returns that memory to Perl's pool. However, the pool may or may not return the memory to the OS, as seen here.
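As a rough illustration, the core B module can report how many bytes perl has allocated for a scalar's string buffer (the SvLEN field). This is only a sketch; buf_len is a hypothetical helper written for it, and the exact sizes vary between perls:

```perl
use strict;
use warnings;
use B ();

# Hypothetical helper: bytes perl has allocated for a scalar's string
# buffer (the SvLEN field), via the core B introspection module.
sub buf_len {
    my ($ref) = @_;
    my $sv = B::svref_2object($ref);
    return $sv->can('LEN') ? $sv->LEN : 0;
}

my $buf = "A" x (1024 * 1024);      # a 1MB string
printf "after fill:  %d bytes\n", buf_len(\$buf);

$buf = "";                          # emptied, but the buffer is kept
printf "after empty: %d bytes\n", buf_len(\$buf);

undef $buf;                         # buffer returned to perl's pool
printf "after undef: %d bytes\n", buf_len(\$buf);
```

On a typical perl the first two values are both around a megabyte (emptying the string does not shrink its buffer), while the value after undef drops to 0 -- though none of this says anything about whether the pages go back to the OS.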
G'day DrHyde - indeed I tried your suggestion, as well as several other variations before posting my question. In the end I settled on the code that seemed to express my question most clearly, not necessarily the code I would use :-)
(Actually the code and question came to me from a colleague, and I was working variations on *his* code).
But I still have a question - when the 100MB string is allocated I see perl use 200MB of RAM, and after I have undef'd the scalar the memory use drops to 100MB.
I'm at a loss to explain this behaviour - why would the 200MB drop to 100MB? I would have thought it should either drop to nothing (if the buffer was released) or stay at 200MB (if the buffer was not released)?
regards, Anthony
Always remember that an operating system is, by design, extremely “lazy.” In other words:
- When your process makes the memory-allocation request, the operating system will first make sure that it can carry out the request without grossly-overcommitting the memory resource. (A certain amount of overcommit is fine.)
- When your process actually does something, to create actual pressure upon a resource, only then will the operating-system begin to take action. (If, in a different scenario, you asked for 100MB but only filled 2MB with anything other than zeroes, the actual pressure that you exerted would be 2MB, not 100MB.)
- (Your process did exert 100MB of actual pressure, one 4K-page at a time, because it did fill all those bytes with something...)
- When your process releases a resource, once again the resource will be “marked as releasable,” but the resource will actually remain nearby until, and unless, actual pressure is manifested from somewhere-else that compels the operating system to begin reallocation.
You see, the odds are good that a process's future behavior will be similar to its recent behavior. Processes that are making use of big buffers (and that are not brilliantly written by their designers...) are likely to be grabbing and releasing those big buffers in a loop. Programs that have been run recently are much more likely to be run again soon than are any that were not. Files that were used recently are the same way. So, there are plenty of extremely good reasons for the operating system to say, “if it's not actually squeaking, or if the squeaking doesn't matter to anyone else right now, don't bother to grease it.”
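You can watch this from inside a process. A minimal sketch, assuming Linux (it reads the VmRSS line from /proc/self/status; whether memory actually returns to the OS after the undef depends on the malloc implementation, so the last number may or may not drop):

```perl
use strict;
use warnings;

# Resident set size of this process in kB, from /proc (Linux only).
sub rss_kb {
    open my $fh, '<', '/proc/self/status' or die "cannot read /proc: $!";
    while (my $line = <$fh>) {
        return $1 if $line =~ /^VmRSS:\s+(\d+)\s+kB/;
    }
    die "VmRSS not found";
}

printf "at start:    %d kB resident\n", rss_kb();

my $buf = 'A' x (1024 * 1024 * 100);   # touch all 100MB, one page at a time
printf "after fill:  %d kB resident\n", rss_kb();

undef $buf;                            # released to the allocator...
printf "after undef: %d kB resident\n", rss_kb();   # ...but maybe not to the OS
```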