in reply to Re^4: memory leaks with threads
in thread memory leaks with threads

I didn't understand what you wrote about my code, since $threadscount is locked until it is incremented and $count = $threadscount. Could you perhaps explain this?

If main creates a new thread, then that thread exists, and is therefore consuming memory, before it can possibly increment $threadscount. So if a thread gets the lock on $threadscount and is then swapped out while holding that lock, and main gets a timeslice, main will go into a tight loop creating 300 threads. Each of those threads, when it gets a timeslice, will attempt to lock $threadscount, but the lock is still held by another thread, so they will all block.

The result is that you've just created 300 new threads that each occupy memory, but that will never show up against the $maxthreads limit, because they are all blocked from incrementing $threadscount.

Actually, that is only one of several scenarios in your code that would lead to $maxthreads not reflecting the true situation.

Could you try this also? It absolutely guarantees that there are never more than $MAX (default 10) threads running concurrently. It uses the thread id of the most recently created thread to report how many threads have been created so far.

On my system with $MAX set to 10, the memory usage wobbles a (very) little either side of 6MB, but even after 100,000+ creation/join cycles, it never displays any sign of long term memory growth.

    use threads;
    use threads::shared;

    our $MAX ||= 10;
    $|++;

    my @threads;
    while( 1 ) {
        push @threads, threads->create( sub{ 1; } ) while @threads < $MAX;
        printf "\rCreated so far: %d\t", $threads[ -1 ]->tid;
        $_->join for @threads;
        @threads = ();
    }

What do you see?


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
"Too many [] have been sedated by an oppressive environment of political correctness and risk aversion."

Re^6: memory leaks with threads
by misc (Friar) on Jul 09, 2007 at 18:49 UTC
    Thanks for your explanation.
    The last script, however, grows quickly and constantly; after 1 minute it is at about 30MB of resident memory.
    I tried with perl 5.8.8 and perl 5.9.5.

    I also checked /proc/$PID/status
    The number of threads is not growing above $MAX.

    Is there a possibility that this is somehow related to my dual core processor?

      Hm. This appears to be a platform-specific problem then, because I left the last script running, and after close to 1/4 million thread creation/destroy cycles the memory use is still rock steady:

          C:\test>td-threads
          Created so far: 212710

          Image Name                 PID  Session Name  Session#  Mem Usage
          =========================  ====  ============  ========  =========
          tperl.exe                  2284                       0    6,456 K

          c:\test>tperl -v
          This is perl, v5.8.6 built for MSWin32-x86-multi-thread
          (with 3 registered patches, see perl -V for more detail)
          [...]
          Binary build 811 provided by ActiveState Corp. http://www.ActiveState.com
          ActiveState is a division of Sophos.
          Built Dec 13 2004 09:52:01
          [...]

      Which OS are you running? Maybe you should raise a perlbug against 5.9.5.

      Is there a possibility that this is somehow related to my dual core processor?

      My gut says no, but I don't have a dual core with which to verify my gut one way or the other.


        I'm running Linux, 2.6.18, 32bit on a turion x2 tl-60.

        I just ran a similar program written in c.
        It doesn't grow (stays at exactly 944B of resident memory), even after 15 million threads created and destroyed.
        So I can at least say it's not related to my pthreads library.

        I'll wait a bit before submitting a bug report, though; maybe someone else knows a solution.
        #include <iostream>
        #include <cstdlib>
        #include <pthread.h>
        #include <unistd.h>
        #include <sys/select.h>
        #include <sys/time.h>

        using namespace std;

        static int threadscount = 0;
        pthread_mutex_t threadscount_mutex = PTHREAD_MUTEX_INITIALIZER;

        void* thread( void* data ){
            pthread_mutex_lock( &threadscount_mutex );
            threadscount--;
            //cout << threadscount << endl;
            pthread_mutex_unlock( &threadscount_mutex );
            pthread_exit(0);
        }

        int main(int argc, char *argv[]){
            int count = 0;
            struct timeval timeout;
            while (1){
                for ( int a = 1; a <= 10; a++ ){
                    pthread_mutex_lock( &threadscount_mutex );
                    threadscount++;
                    pthread_mutex_unlock( &threadscount_mutex );
                    pthread_t p_thread;    /* thread's structure */
                    pthread_create( &p_thread, 0, thread, (void*)&a );
                    pthread_detach( p_thread );
                    count++;
                }
                cout << "count: " << count << endl;
                int c;
                do {
                    timeout.tv_usec = 100;
                    timeout.tv_sec = 0;
                    select( 0, 0, 0, 0, &timeout );
                    pthread_mutex_lock( &threadscount_mutex );
                    c = threadscount;
                    pthread_mutex_unlock( &threadscount_mutex );
                } while ( c > 0 );
            }
        }
        Is there a possibility that this is somehow related to my dual core processor ?
        My gut says no, but I don't have a dual core with which to verify my gut one way or the other.

        Note that with a single core, many race conditions are masked, because only one thread ever runs at a time. I've seen several cases of code that works great "forever" on a single-core system but gets confused when run on a multi-processor or multi-core system. Losing track of resources, and thus leaking memory, is certainly a possible outcome of such race conditions.

        It isn't terribly hard to test this, either: just bind the process to a single processor/core and see if the memory leak goes away.

        The leak being that dramatic vs. not at all is a rather stark difference to explain with a race condition, however, so a configuration difference seems more likely.

        - tye