http://qs1969.pair.com?node_id=1069357


in reply to Threaded Code Not Faster Than Non-Threaded -- Why?

I don't understand this part. I think you have too many queues; you should probably have just two: one for files to process, and one for the results of that processing.

The closures don't make sense to me, either.

The meat of the threading code :)

sub create_thread_pool {
   my $files_to_digest = shift;

   threads->create( threads_progress => $files_to_digest );

   for ( 1 .. $opts->{threads} ) {
      my $thread_queue  = Thread::Queue->new;
      my $worker_thread = threads->create( worker => $thread_queue );

      $worker_queues->{ $worker_thread->tid } = $thread_queue;
   }

   lock $threads_init;
   $threads_init++;
}

sub get_dup_digests {
   my $size_dups = shift;
   my $dup_count = 0;
   my $queued    = 0;

   $dup_count += @$_ for map { $size_dups->{ $_ } } keys %$size_dups;

   # creates thread pool, passing in as an argument the number of files
   # that the pool needs to digest.  this is NOT equivalent to the number
   # of threads to be created; that is determined in the options ($opts)
   create_thread_pool( $dup_count );

   sub get_tid {
      my $tid = $pool_queue->dequeue;
      return $tid;
   }

   my $tid = get_tid();

   SIZESCAN: for my $size ( keys %$size_dups ) {
      my $group = $size_dups->{ $size };

      for my $file ( @$group ) {
         $worker_queues->{ $tid }->enqueue( $file ) if !$thread_term;

         $queued++;

         $tid = get_tid() and $queued = 0 if $queued == $opts->{qsize} - 1;

         last SIZESCAN unless defined $tid;
      }
   }

   # wait for threads to finish
   while ( $d_counter < $dup_count ) {
      usleep 1000; # sleep for 1 millisecond
   }

   # ...tell the threads to exit
   end_wait_thread_pool();

   # get rid of non-dupes
   delete $digests->{ $_ }
      for grep { @{ $digests->{ $_ } } == 1 } keys %$digests;

   my $priv_digests = {};

   # sort dup groupings
   for my $digest ( keys %$digests ) {
      my @group = @{ $digests->{ $digest } };

      $priv_digests->{ $digest } = [ sort { $a cmp $b } @group ];
   }

   undef $digests;

   return $priv_digests;
}
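Here's a minimal sketch of what I mean by two queues, assuming a fixed worker count and MD5 digesting; the queue names, worker(), and the $nthreads value are mine, not your code:

use strict;
use warnings;
use threads;
use Thread::Queue;
use Digest::MD5;

# NOTE: queue names, worker(), and $nthreads are illustrative stand-ins
my $work     = Thread::Queue->new;   # queue 1: files to process
my $done     = Thread::Queue->new;   # queue 2: results of that processing
my $nthreads = 4;                    # would come from $opts->{threads}

sub worker {
   # pull filenames until the undef "stop" sentinel arrives
   while ( defined( my $file = $work->dequeue ) ) {
      open my $fh, '<', $file or next;
      binmode $fh;
      my $digest = Digest::MD5->new->addfile( $fh )->hexdigest;
      $done->enqueue( "$digest\0$file" );   # pack result into one scalar
   }
}

my @pool = map { threads->create( \&worker ) } 1 .. $nthreads;

$work->enqueue( @ARGV );                    # queue every file up front
$work->enqueue( ( undef ) x $nthreads );    # one sentinel per worker

$_->join for @pool;

# all workers have exited, so a non-blocking drain empties the results
while ( defined( my $result = $done->dequeue_nb ) ) {
   my ( $digest, $file ) = split /\0/, $result, 2;
   print "$digest  $file\n";
}

With a single shared input queue there is nothing to balance by hand: idle workers just pull the next file, so the per-worker queues, the tid bookkeeping, and the $pool_queue round-trip all disappear.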

Re^2: Threaded Code Not Faster Than Non-Threaded -- Why? (meat)
by Tommy (Chaplain) on Jan 05, 2014 at 05:24 UTC
    I don't understand this part. I think you have too many queues; you should probably have just two: one for files to process, and one for the results of that processing.

    It comes straight from here, which came straight from here. If my implementation of that core documentation code is flawed, I really want to see an implementation that isn't. I'm totally serious. I want to learn how to do it right.

    Tommy
    A mistake can be valuable or costly, depending on how faithfully you pursue correction

      I think you're misreading it - it includes examples of creating queues, but I don't see it implying that you need multiple queues.

      I have an example of a 'queue based' worker thread model: A basic 'worker' threading model

      Personally, I'd be thinking in terms of using 'File::Find' to traverse your filesystem linearly, but have it feed a queue with files that need more detailed inspection. The two most expensive operations in this process are: filesystem traversal, which is hard to optimise without messing with disks and filesystem layout; and reading the files to calculate their hashes, where the reading may well be more 'expensive' than doing the sums. My thought would be to ask whether you can do partial hashes, iteratively. If you work through a file one block at a time (the optimal size varies by filesystem), each block costs a single read IO operation, which you then hash, and you keep working through the file, if it's longer, until the hashes stop matching. If the file is a genuine dupe you'll still have to read the whole lot, but if it's not, it gets discarded faster. A sketch of the idea follows.
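      As a rough sketch of that iterative idea (the @candidates list, the 64 KB block size, and the variable names here are illustrative, not your code), you could bucket a group of same-size files by the digest of their first block only, so most non-dupes are discarded after a single read each:

      use strict;
      use warnings;
      use Digest::MD5 qw(md5_hex);

      my @candidates = @ARGV;  # stand-in: a group of same-size files
      my $block      = 65536;  # assumed block size; tune per filesystem
      my %bucket;

      for my $file ( @candidates ) {
         open my $fh, '<', $file or next;
         binmode $fh;
         read $fh, my $buf, $block;                  # one read IO per file
         push @{ $bucket{ md5_hex( $buf // '' ) } }, $file;
      }

      # only buckets holding 2+ files can still contain dupes; repeat the
      # process on their next block (or full digests) for those survivors
      for my $digest ( grep { @{ $bucket{$_} } > 1 } keys %bucket ) {
         print "possible dupes: @{ $bucket{$digest} }\n";
      }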

        Thanks for the example! I'll check it out.

        I'm sorry, I provided the wrong link. The code I wrote is based on code taken directly from the examples directory of the threads CPAN distro by JDHEDDEN. It's called pool_reuse.pl

        The block-by-block comparison of files that you proposed is actually part of my next approach. I may be able to forgo digesting the file content altogether and get a real speed boost by reading only as many bytes from a file as I need in order to tell that it's different. Much less IO required. A rough sketch of what I mean is below.
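        Roughly, something like this pairwise check (files_identical() and the 64 KB read size are just my sketch, not code from the distro): stop at the first block that differs, with no digesting at all.

        use strict;
        use warnings;

        sub files_identical {
           my ( $file_a, $file_b ) = @_;

           return 0 if -s $file_a != -s $file_b;  # cheap size check first

           open my $fh_a, '<', $file_a or return 0;
           open my $fh_b, '<', $file_b or return 0;
           binmode $_ for $fh_a, $fh_b;

           while ( read $fh_a, my $buf_a, 65536 ) {
              read $fh_b, my $buf_b, 65536;
              return 0 if $buf_a ne $buf_b;       # bail at first difference
           }

           return 1;  # same size, no mismatching block: identical
        }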

        Tommy
        A mistake can be valuable or costly, depending on how faithfully you pursue correction

        Second reply: By way of follow-up, I wanted to thank you, Preceptor, for the informative link. I also didn't respond to your comment about File::Find: I didn't use it because it is slower than File::Util's directory traversal in my own tests.

        I have forked the reference code for the hackathon and implemented threading consistent with your code example. More to come; I'm benching it right now.

        Tommy
        A mistake can be valuable or costly, depending on how faithfully you pursue correction