in reply to Memory management with long running scripts

(2) Are there known issues with versions of Event that would cause this issue?

I looked at Event:

* $event->remove This removes an event object from the event-loop. Note that the object itself is not destroyed and freed. It is merely disabled and you can later re-enable it by calling $event->add.
Maybe there is an issue with how these Events are handled?

(4) Can the use of eval() cause this sort of issue?

I'm not sure how you are using eval. If you are eval'ing some ever-increasing string, each eval would compile more code and take more memory.
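As a hypothetical sketch of that pattern (the growing code string here is made up for illustration, not taken from the OP's script): each pass evals a strictly larger chunk of code, so the work and memory grow with the string.

```perl
# Hypothetical illustration: eval'ing an ever-growing string.
# Each iteration compiles and runs a larger piece of code.
my $code = 'my $x = 0;';
my $result;
for my $i (1 .. 3) {
    $code .= " \$x += $i;";           # the code string keeps growing
    $result = eval "$code \$x";       # recompile and rerun the whole thing
    print "iteration $i: result=$result length=", length($code), "\n";
}
```

In a long-running loop the string (and each eval's compiled code) only gets bigger, which matches the "ever increasing thing" concern above.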

(3) Is there a fundamental difference in how perl allocates memory for anonymous arrays/hashes vs @arrays and %hashes? (ie stack vs heap?) that would affect memory management?

To my knowledge, no. Perl does not "free" memory back to the OS; once it has it, it is not returned. There is a big difference, though, in allowing Perl to reuse the memory that it already has for itself (e.g., by destroying Perl objects).
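An illustrative sketch of that reuse (sizes are arbitrary; to see the effect you would watch the process footprint in top or Task Manager, which this code does not measure itself):

```perl
# Sketch: memory freed by Perl variables stays in Perl's own pool.
{
    my @first = (0) x 1_000_000;    # grab a sizeable chunk from the OS
}                                    # @first is cleared here; the pool is kept
my @second = (0) x 1_000_000;        # largely satisfied from the reused pool
print scalar(@second), "\n";
```

The process footprint should grow for the first allocation but not much for the second, because Perl hands out the memory it already holds rather than asking the OS again.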


Re^2: Memory management with long running scripts
by bulk88 (Priest) on Jul 21, 2012 at 20:33 UTC
    To my knowledge, no. Perl does not "free" memory back to the OS; once it has it, it is not returned. There is a big difference, though, in allowing Perl to reuse the memory that it already has for itself (e.g., by destroying Perl objects).

    Not true on Windows Perl; on Unix perl, I don't know. I've heard on PerlMonks that Unix malloc uses one contiguous memory block for malloc'd memory that grows upward sequentially (sbrk style). The MS C library/Windows malloc uses different non-contiguous pools, and allocations over a certain size basically go straight to the VM paging system (mmap style) and get random blocks of paging memory. According to p5p, until this or last month, compiled OPs were not freeable, or something similar. Weak references used to leak in 5.10, and I think it was fixed in 5.10.1 (I personally ran into that). So there is a realistic chance your leak is in Perl and not XS modules. 5.8 is very old.

    Update: the weak-ref leak is https://rt.perl.org/rt3/Public/Bug/Display.html?id=56908.
      I am certainly willing to learn something new!
      Can you make a Perl process on Windows that allocates and uses a large amount of physical memory, say 500 MB worth, and then show that Perl "released it" to the OS without the Perl process terminating? I am running Perl 5.10.1.

        Interesting. FWIW, on Windows 7 (monitoring with Task Manager) and running Strawberry 5.14.2.1, the program below will release (very) roughly half of the allocated memory when the undef $s is done. If the undef statement is changed to $s = '', nothing is deallocated.

        Half?!? What this means I do not know. (But see Update2 below.)

        >perl -wMstrict -le "my $s = '.' x 500_000_000; sleep 5; undef $s; sleep 10; "
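        The undef vs. $s = '' difference can be inspected with the core Devel::Peek module; a small sketch (my reading of the Dump output: LEN is the size of the scalar's allocated string buffer, and the exact numbers may vary by perl build):

```perl
use Devel::Peek;

my $s = '.' x 1_000;
$s = '';      # string is now empty, but the ~1000-byte buffer is retained
Dump($s);     # (to STDERR) LEN should still be about 1001
undef $s;     # releases the buffer back to Perl's allocator
Dump($s);     # (to STDERR) the PV buffer is gone
```

That would explain why $s = '' deallocates nothing: the scalar keeps its buffer around for reuse, while undef actually gives it up.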

        Update: Same results with Strawberry 5.10.1.5.

        Update2: Actually, I think I understand a little of what is going on. From the Task Manager memory stats, when the '.' x 500_000_000 expression of the my $s = '.' x 500_000_000; statement executes, it consumes about 500M to build the string. The string is then copied to the $s scalar, thus consuming another 500M, for a total of 1G. It appears the first 500M (used to build the string) is never deallocated. Only the memory consumed by the $s scalar is deallocated when it is undef-ed.
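        If that analysis is right, the extra 500M temporary could be sidestepped by growing the scalar in place instead of assigning one giant x result. A sketch (sizes shrunk so it runs instantly; whether this actually halves the peak footprint depends on the perl build and should be checked in Task Manager as above):

```perl
# Sketch: build the string in place rather than via one big temporary.
# Peak memory is then roughly length($s), not 2 x length($s).
my $target = 1_000;        # small stand-in for the OP's 500_000_000
my $chunk  = '.' x 100;    # modest temporary, reused on each append
my $s = '';
$s .= $chunk for 1 .. $target / 100;
print length($s), "\n";
```

Appending in chunks means the only large allocation is $s's own buffer; no second full-size copy of the string is ever built.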