Thank you for your comprehensive and helpful answers!
I will need to think it over - but first some quick remarks:
First some comments on the problem as a whole.
The input contains different record types because there is transaction data and master data. Most of the work is finding the right data in a transactional record, doing some (static) recoding, and cross-referencing via mapping files. I do this in ParseDok - so "parse" is a bit of shorthand :-)
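In sketch form, the recoding step looks something like this (file name, separator, and field names are simplified stand-ins, not the real ParseDok code):

    use strict;
    use warnings;

    # Load a mapping file once (assumed format: "old;new" per line).
    my %code_map;
    open my $mf, '<', 'codes.map' or die "codes.map: $!";
    while (my $line = <$mf>) {
        chomp $line;
        my ($old, $new) = split /;/, $line;
        $code_map{$old} = $new;
    }
    close $mf;

    # Static recoding of one field of a transactional record.
    sub recode_record {
        my ($rec) = @_;
        $rec->{code} = $code_map{ $rec->{code} } // $rec->{code};
        return $rec;
    }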
But some data (the minor part) depends on previous records. That means a record X announces a numbering change that then has to be applied to all following records.
So the writer thread is not only writing but also maintaining the original order and doing some filtering and code mapping. Sorry - I tried to keep my post short.
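In sketch form the writer loop looks like this (all names made up; it assumes the workers enqueue [$seq, $record] pairs so the original order can be restored):

    use strict;
    use warnings;
    use threads;
    use Thread::Queue;

    # Called with the workers' results queue and the output filehandle.
    sub writer_loop {
        my ($results_q, $out_fh) = @_;
        my %pending;        # finished records that arrived out of order
        my $next_seq = 0;   # sequence number of the next record to write
        my %renumber;       # active numbering map, updated by "announce" records

        while (defined(my $item = $results_q->dequeue())) {
            my ($seq, $rec) = @$item;
            $pending{$seq} = $rec;

            # Flush every record that is now in its original position.
            while (exists $pending{$next_seq}) {
                my $r = delete $pending{$next_seq++};
                if ($r->{type} eq 'announce') {   # record X: numbering change
                    $renumber{$_} = $r->{map}{$_} for keys %{ $r->{map} };
                    next;                         # nothing to write for it
                }
                next if $r->{skip};               # filtering
                $r->{number} = $renumber{ $r->{number} } // $r->{number};
                print {$out_fh} join(';', @{$r}{qw(number type payload)}), "\n";
            }
        }
    }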
"Push references to already shared hashes."
I tried this - it was slower than the deep copy.
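In sketch form, that variant looks like this (simplified, not my original code). The point is that enqueue() passes an already-shared reference through unchanged, so there is no copy on the queue - but every later access to the shared hash goes through threads::shared's locking/tied machinery, which is presumably where the time went:

    use strict;
    use warnings;
    use threads;
    use threads::shared;
    use Thread::Queue;

    my $q = Thread::Queue->new();

    # shared_clone() makes the whole structure shared once, up front.
    my $rec = shared_clone({ id => 42, fields => ['a', 'b', 'c'] });

    # An already-shared reference is enqueued as-is - no deep copy here.
    $q->enqueue($rec);

    # But every access in the consumer is now a locked, tied operation.
    my $got = $q->dequeue();
    print "$got->{id}\n";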
"Why do you want to queue hashes from one thread to the other in the first place?"
This was how I did it in the single-threaded version. So my first try was to put the parsing into worker threads and pass back the existing hashes. Now I am working on a new solution - hence my questions.
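That first try, reduced to a sketch (names and record format made up): workers parse lines into plain hashes and queue them back. Thread::Queue deep-clones each unshared hashref via shared_clone() on enqueue, which is where the copying happens:

    use strict;
    use warnings;
    use threads;
    use Thread::Queue;

    my $in_q  = Thread::Queue->new();
    my $out_q = Thread::Queue->new();

    sub parse_line {              # trivial stand-in for the real ParseDok
        my ($line) = @_;
        my ($id, $rest) = split /;/, $line, 2;
        return { id => $id, payload => $rest };
    }

    my @workers = map {
        threads->create(sub {
            while (defined(my $line = $in_q->dequeue())) {
                # enqueue() deep-clones this unshared hashref via
                # shared_clone() - this is the copy being paid for
                $out_q->enqueue(parse_line($line));
            }
        });
    } 1 .. 4;

    $in_q->enqueue("$_;some data") for 1 .. 10;
    $in_q->end();                 # newer Thread::Queue; makes dequeue() return undef
    $_->join() for @workers;

With more than one worker the results come back out of order, which is exactly why the writer thread has to reorder them.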
"Going back to your original application rather than your wholly artificial test code"
I did not think it artificial because it is more or less the isolated code fragment of my thread handling. It is my test/experimental code for trying out new solutions. Sub ParseDok alone is ~1,100 lines of code (sure, not all in one function!). I was interested in measuring the time differences for passing data between threads, to get a feel for that.
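In sketch form, a measurement of that kind can look like this (not my actual test code; record size and count are arbitrary):

    use strict;
    use warnings;
    use threads;
    use Thread::Queue;
    use Time::HiRes qw(gettimeofday tv_interval);

    my $q = Thread::Queue->new();
    my $n = 10_000;

    # One consumer that just drains the queue.
    my $consumer = threads->create(sub {
        my $count = 0;
        $count++ while defined($q->dequeue());
        return $count;
    });

    my $t0 = [gettimeofday];
    $q->enqueue({ id => $_, payload => 'x' x 100 }) for 1 .. $n;
    $q->end();                    # newer Thread::Queue
    my $got = $consumer->join();
    printf "passed %d hashes in %.3fs\n", $got, tv_interval($t0);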