Instead I thought I'd try doing both files at once. Like so:

open IN, '<', $ARGV[0];
my %hash1;
while (<IN>) {
    next unless ( $_ =~ m/^\@HWI/ );
    my ($header) = split(/ /, $_);
    $hash1{$header} = 1;
}
close IN;

open IN2, '<', $ARGV[1];
my %hash2;
while (<IN2>) {
    next unless ( $_ =~ m/^\@HWI/ );
    my ($header) = split(/ /, $_);
    $hash2{$header} = 1;
}
close IN2;
While the multithreaded code works, it's about 50% slower than the first one, so nothing is really accomplished there. It seems to me that the problem may be that the threads take a long time passing the created hashes back to the main script, so it's shuffling things around in memory, but I'm on really thin ice here, I must admit. Any input is appreciated. Here is the threaded version:

use threads;

my @threads = ("1","2");

# Loop through the array:
foreach (@threads) {
    # Tell each thread to perform our 'parseLines()' subroutine.
    $_ = threads->create(\&parseLines, shift(@ARGV));
}

# Tries to join the running threads.
# Some check implemented to avoid quitting the loop before everything is joined.
my @running  = threads->list(threads::running);   # Array of running threads
my @joinable = threads->list(threads::joinable);  # Array of joinable threads
my @catcher;
while (scalar(@running) != 0 || scalar(@joinable) > 0) { # While there are running or joinable threads
    @running = threads->list(threads::running); # Repopulate running, not sure if needed.
    foreach (@threads) {
        if ($_->is_joinable()) {
            push(@catcher, $_->join()); # Puts parsed file as hash-ref into array
        }
    }
    @joinable = threads->list(threads::joinable);
    @running  = threads->list(threads::running);
}

sub parseLines {
    open IN, '<', $_[0];
    my %hash;
    while (<IN>) {
        next unless ( $_ =~ m/^\@HWI/ );
        my ($header) = split(/ /, $_);
        $hash{$header} = 1;
    }
    close IN;
    return \%hash;
}
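If the suspicion is that join() spends its time copying the per-thread hashes back into the main thread, one way to check is to time the parsing inside each worker separately from the joins in the main thread. The following is only a rough sketch of that idea (assuming Time::HiRes is available and the file names come from @ARGV as above), not a fix:

use strict;
use warnings;
use threads;
use Time::HiRes qw(time);

# Parse one file in a worker thread and report how long the parsing
# itself took, so it can be compared against the time spent joining.
sub timed_parse {
    my ($file) = @_;
    my $t0 = time;
    open my $in, '<', $file or die "Cannot open $file: $!";
    my %hash;
    while (<$in>) {
        next unless /^\@HWI/;
        my ($header) = split / /, $_;
        $hash{$header} = 1;
    }
    close $in;
    warn sprintf "parsed %s in %.2f s\n", $file, time - $t0;
    return \%hash;
}

my @workers = map { threads->create(\&timed_parse, $_) } @ARGV;

my $t0 = time;
# join() blocks until a thread finishes and then copies its return value
# back into the main thread; comparing this total against the per-thread
# parse times gives a rough idea of how much the copying itself costs.
my @hash_refs = map { $_->join() } @workers;
warn sprintf "joining (waiting + copying results back) took %.2f s\n", time - $t0;

As a side note, joining the threads in creation order like this avoids the polling loop over threads::running and threads::joinable, since join() simply blocks until its thread is done.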
In reply to Using threads to process multiple files by anli_