in reply to Re: threads::shared seems to kill performance
in thread threads::shared seems to kill performance

Thank you. I considered not sharing, but that would effectively mean each thread would be set up with a copy of the original 240MB array, would it not?

I was afraid this would quickly kill my memory, but thinking about it now, isn't there a chance this would be copy-on-write only? And thus even 1000 threads would (considering I only do reads) still use only 240MB of memory?


Re^3: threads::shared seems to kill performance
by Preceptor (Deacon) on Jul 18, 2013 at 08:16 UTC

    Couldn't say myself without trying it. I know some modes of parallel processing use copy-on-write memory, and others don't. I'm pretty sure the Unix 'fork' does, for example; I've never had occasion to check whether threads do too.
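    A quick way to see fork's copy-on-write behaviour is a minimal sketch like the one below (Unix only; the small array is a stand-in for the real 240MB one). The child reads the inherited array without any explicit copying, and unmodified pages stay shared with the parent at the OS level. One caveat worth knowing: Perl's reference counting writes to a value's header even on reads, so in practice some pages do get copied.

```perl
use strict;
use warnings;

# Stand-in for the big read-only array (the real one is ~240MB).
my @big_array = (0 .. 999);

my $pid = fork;
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # Child: reads the array it inherited from the parent. No copy is
    # made at fork time; the OS shares the pages until one side writes.
    print "child sees element 42 = $big_array[42]\n";
    exit 0;
}
waitpid $pid, 0;
```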

    It may not be viable, but depending on how frequently you read the array, you might find you can have a 'handler' thread that services requests for data from the hash.
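    A minimal sketch of that handler pattern, assuming Thread::Queue (the names and the toy array are hypothetical, not the OP's code): the handler is the only thread that touches the big structure, and workers ask for elements over queues, so nothing large needs to be shared.

```perl
use strict;
use warnings;
use threads;
use Thread::Queue;

# Stand-in for the big read-only data (the real one is ~240MB).
my @big_array = map { $_ * 2 } 0 .. 99;

my $req_q = Thread::Queue->new;   # indices to look up
my $res_q = Thread::Queue->new;   # values sent back

# Handler thread: the only thread that reads @big_array.
my $handler = threads->create(sub {
    # dequeue blocks until a request arrives; undef means "shut down".
    while (defined(my $idx = $req_q->dequeue)) {
        $res_q->enqueue($big_array[$idx]);
    }
});

$req_q->enqueue(3);
my $val = $res_q->dequeue;
print "value at index 3: $val\n";   # 3 * 2 = 6

$req_q->enqueue(undef);   # tell the handler to exit
$handler->join;
```

    One thing to keep in mind: ithreads copy the interpreter's data at thread-creation time, so spawning the workers *before* loading the big array keeps them from carrying their own copies.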

    Otherwise: your code so far is all about initially creating the hash. How does it perform once that's finished? It may be worth the overhead.