Using threads together with threads::shared (to share the hash) might help, or it might make things worse.
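If you do go that way, here is a minimal sketch of sharing a hash across threads, assuming a threads-enabled perl. The worker body, the placeholder "processing" (length $dir) and the two-way chunking are illustrative only, not your actual code:

    use strict;
    use warnings;
    use threads;
    use threads::shared;

    # shared hash, visible to all threads
    my %hash :shared;

    # hypothetical worker: store one entry per directory name
    sub worker {
        for my $dir (@_) {
            # nested structures must be shared explicitly
            $hash{$dir} = shared_clone({ area => length $dir });  # placeholder processing
        }
    }

    my @dirs    = ('a' .. 'f');
    my @chunks  = ([@dirs[0..2]], [@dirs[3..5]]);  # split the work between two threads
    my @threads = map { threads->create(\&worker, @$_) } @chunks;
    $_->join for @threads;

    print "$_ => $hash{$_}{area}\n" for sort keys %hash;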
If your problem is that your script runs too slowly because you process a lot of files, or big files, first check whether that time is spent mostly reading or mostly processing. Just empty the body of the while loop, i.e. write while(<$fh>) {}, and see if the script runs significantly faster. If it does not, the bottleneck is reading the files, and chances are multiple threads won't be able to do anything about it: if all those folders are on the same device, all your threads will have to wait for that device to become available, so in the best case they will just run in sequence (and in the worst case they may force the device to keep jumping back and forth between files).
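A rough way to measure that reading-only baseline (a sketch using Time::HiRes; the file path is a placeholder), to compare against the time of your full loop:

    use strict;
    use warnings;
    use Time::HiRes qw(time);

    my $file = 'some/big/file.txt';   # placeholder path
    open my $fh, '<', $file or die "Can't open $file: $!";

    my $t0 = time;
    while (<$fh>) {}                  # read only, no processing
    printf "read only: %.3fs\n", time - $t0;

    close $fh;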
You may be able to win a little time by not reading line by line, but jumping directly to the first occurrence of "area=" by setting the input record separator:
    { # block to limit the effect of local
        local $/ = "\narea=";
        <$fh>;                # read until the first "\narea="
    } # here we are back to reading until the end of the line

    if (not eof $fh)          # if the end of the file hasn't been reached while trying to find "area="
    {
        $hash{$dir}{area} = <$fh>;  # read the rest of the line
    }
Edit: it seems there can only be one "area=" per file, so I have removed the outer loop. If you keep your current code, you can stop reading the file as soon as you have found a result, with last (see the sketch below). Also, note that my example would miss an "area=" line if it is the very first line of the file, since it searches for "\narea=".
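With your original line-by-line loop, that early exit might look something like this. This is a sketch: $fh, %hash and $dir are assumed from your code, and the regex is a guess at your matching logic, so adjust it to your actual parsing:

    while (my $line = <$fh>) {
        if ($line =~ /^area=(.*)/) {
            $hash{$dir}{area} = $1;
            last;   # only one "area=" per file, so stop reading here
        }
    }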
In reply to Re: hashes & threads by Eily in thread hashes & threads by gravid