When I get back to work on Monday, I plan on updating the actual script. I'm dealing with files that have 30-45 million rows. I'm hoping that spawning 10+ threads (on a machine with 15+ CPUs), each solely parsing one file, will help reduce the runtime, rather than working through each file sequentially, one by one.
Here's my prediction: even modified, your multi-threaded code ran significantly more slowly (an order of magnitude) than your single-threaded code.
And my solution: if you supplied me with the information I requested, I could process those same files into a hash in a tenth of the time your single-threaded code takes.
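The usual reason a threaded rewrite comes out slower than the serial version is that every insert into a threads::shared hash takes a lock, so 30-45 million rows mean 30-45 million lock round-trips. A minimal sketch of the alternative (this is an illustration, not the poster's actual script; the tab-delimited layout and the key-counting parse are assumptions): give each worker thread its own private hash and merge the per-file hashes in the main thread after join(), so there is one merge per file rather than one synchronised write per row.

```perl
use strict;
use warnings;
use threads;
use File::Temp qw(tempfile);

# Hypothetical worker: parse one file into a hash PRIVATE to this
# thread -- no threads::shared, so no per-insert locking.
sub parse_file {
    my ($file) = @_;
    my %counts;
    open my $fh, '<', $file or die "open $file: $!";
    while ( my $line = <$fh> ) {
        chomp $line;
        my ($key) = split /\t/, $line;   # assumed tab-delimited layout
        $counts{$key}++;
    }
    return \%counts;                     # handed back to the joiner
}

# Two tiny demo files stand in for the 30-45 million row inputs.
my @files;
for my $rows ( [ "a\t1\n", "b\t2\n" ], [ "a\t3\n", "c\t4\n" ] ) {
    my ( $fh, $name ) = tempfile( UNLINK => 1 );
    print {$fh} @$rows;
    close $fh;
    push @files, $name;
}

# One worker per file, then merge in the main thread: one merge
# per file instead of one synchronised write per row.
my @workers = map { threads->create( \&parse_file, $_ ) } @files;

my %merged;
for my $thr (@workers) {
    my $part = $thr->join;
    $merged{$_} += $part->{$_} for keys %$part;
}

print "$_=$merged{$_}\n" for sort keys %merged;
```

With the demo data above this prints a=2, b=1 and c=1. Requires a threads-enabled perl; for very large files the real win also depends on the disks keeping 10+ readers fed.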
In reply to Re^8: Sharing Hash Question
by BrowserUk
in thread Sharing Hash Question
by jmmach80