in reply to threads::shared seems to kill performance

Hmm, well, I'd sort of expect 5,000,000 &share calls to take a fair amount of time, yes. Hashes - particularly multidimensional ones - don't work well with threads::shared. What you've got is essentially a fudge that creates a lot of separate shared anonymous hashes and links them together.
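
For illustration only (the data here is hypothetical), threads::shared's shared_clone makes that visible: every inner anonymous hash or array has to become its own shared container, so a deep structure costs one shared allocation per level:

    use strict;
    use warnings;
    use threads;
    use threads::shared;

    # Hypothetical nested data: every inner reference must itself be a
    # shared container, so a deep structure means one shared allocation
    # per level - and millions of such share/clone calls add up quickly.
    my %config :shared;
    $config{db} = shared_clone({
        host  => 'localhost',
        ports => [ 5432, 5433 ],    # becomes a separate shared array
    });

    print $config{db}{host}, "\n";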

However if - as you say - your data is read-only from your threads, you might not need to do that: if you initialise it prior to instantiating your threads, each thread will start with its own copy of your global namespace anyway. You just won't be able to modify it within a thread (or technically you can, but the change won't replicate to other threads).
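
A minimal sketch of that approach (table contents and worker count are made up): build the hash before create(), and each thread reads its own private copy:

    use strict;
    use warnings;
    use threads;

    # Build the (unshared) table *before* spawning threads; each thread
    # then starts with its own private copy of it.
    my %lookup = map { $_ => $_ * 2 } 1 .. 100;

    my @workers = map {
        threads->create(sub {
            my ($key) = @_;
            # Reads are fine; a write here would only change this
            # thread's copy, invisible to the others.
            return $lookup{$key};
        }, $_);
    } 1 .. 4;

    print $_->join(), "\n" for @workers;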


Re^2: threads::shared seems to kill performance
by Jacobs (Novice) on Jul 18, 2013 at 04:51 UTC

    Thank you. I considered not sharing, but that would effectively mean each thread would be set up with a copy of the original 240MB array, would it not?

    I was afraid this would quickly kill my memory, but thinking about it now, isn't there a chance this would be copy-on-write only? And thus even 1,000 threads would (considering I only do reads) still use only 240MB of memory?

      Couldn't say myself, without trying it. I know some modes of parallel processing use copy-on-write memory, and others don't. I'm pretty sure Unix fork() does, for example; I've never had occasion to check whether threads do too.
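
      If copy-on-write is what you need, a rough sketch of the fork() route (table size is a stand-in) - on most Unixes the children read the parent's pages without duplicating them:

          use strict;
          use warnings;

          # Stand-in for the 240MB read-only table, built once in the parent.
          my %big = map { $_ => "value $_" } 1 .. 1_000_000;

          for my $id (1 .. 4) {
              my $pid = fork();
              die "fork failed: $!" unless defined $pid;
              if ($pid == 0) {    # child
                  # Read-only access: the parent's pages stay shared
                  # copy-on-write, so this should not duplicate %big.
                  print "child $id sees $big{$id}\n";
                  exit 0;
              }
          }
          wait() for 1 .. 4;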

      It may not be viable, but depending on how frequently you read the array, you might find you can have a 'handler' thread that services requests for data from the hash, along the lines of the sketch below.
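
      One way to sketch that handler pattern (queue layout and data are hypothetical) is with Thread::Queue: workers post a request and block on their own reply queue, and only the handler ever holds the big table:

          use strict;
          use warnings;
          use threads;
          use Thread::Queue;

          my $requests  = Thread::Queue->new();
          my @responses = map { Thread::Queue->new() } 0 .. 3;   # one reply queue per worker

          # Handler thread: the only thread that builds or touches the big
          # table, so workers spawned afterwards never carry a copy of it.
          my $handler = threads->create(sub {
              my %big = map { $_ => $_ ** 2 } 1 .. 1000;         # stand-in data
              while (defined(my $req = $requests->dequeue())) {
                  my ($worker, $key) = split /:/, $req;
                  $responses[$worker]->enqueue($big{$key});
              }
          });

          my @workers = map {
              my $id = $_;
              threads->create(sub {
                  $requests->enqueue("$id:7");                   # ask for key 7
                  return $responses[$id]->dequeue();             # block for the answer
              });
          } 0 .. 3;

          print $_->join(), "\n" for @workers;
          $requests->enqueue(undef);                             # shut the handler down
          $handler->join();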

      Otherwise - your code is all about initially creating the hash. How does it perform once that's finished? The sharing may be worth the one-off overhead.
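
      A quick way to separate the two costs - one-off build versus steady-state reads - sketched with the core Time::HiRes module:

          use strict;
          use warnings;
          use Time::HiRes qw(gettimeofday tv_interval);

          my $t0 = [gettimeofday];
          my %h  = map { $_ => $_ } 1 .. 1_000_000;    # stand-in for the real build
          printf "build: %.3fs\n", tv_interval($t0);

          $t0 = [gettimeofday];
          my $sum = 0;
          $sum += $h{$_} for 1 .. 1_000_000;           # stand-in for the real reads
          printf "reads: %.3fs\n", tv_interval($t0);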