Chuma posted an interesting Long list is long challenge in October 2022, and eyepopslikeamosquito created a separate thread one month later. Many folks contributed solutions.
My goal was to keep memory consumption low whether running a single thread or 20+ threads. Ideally, running more threads should also run faster. It turns out that this is possible. Likewise, there is zero merge overhead because each key lives in exactly one sub-map: just move the elements from all the sub-maps over to a vector for sorting and output.
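Here is a minimal sketch of that merge step, assuming C++17 and std::unordered_map sub-maps (the actual solutions use other map types); the name merge_submaps and the output ordering (count descending, ties by key) are assumptions for illustration:

    #include <algorithm>
    #include <cstddef>
    #include <string>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    using map_t  = std::unordered_map<std::string, long long>;
    using pair_t = std::pair<std::string, long long>;

    // Move all (key, count) elements out of the sub-maps into one flat vector.
    // No deduplication pass is needed: a key hashes to exactly one sub-map,
    // so keys are unique across sub-maps and the merge is zero-overhead.
    std::vector<pair_t> merge_submaps(std::vector<map_t>& maps) {
        std::size_t total = 0;
        for (const auto& m : maps) total += m.size();

        std::vector<pair_t> vec;
        vec.reserve(total);

        for (auto& m : maps) {
            for (auto it = m.begin(); it != m.end(); ) {
                auto node = m.extract(it++);  // C++17: detach node, key is movable
                vec.emplace_back(std::move(node.key()), node.mapped());
            }
        }

        // Assumed output order: count descending, ties broken by key ascending.
        std::sort(vec.begin(), vec.end(), [](const pair_t& a, const pair_t& b) {
            return a.second != b.second ? a.second > b.second : a.first < b.first;
        });
        return vec;
    }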
In a nutshell, the following is the strategy used for the hash-map solutions in the latest June 2023 refresh.
1. Create many sub-maps and mutexes.
2. Process a single file in parallel via chunking, versus processing a list of files in parallel.
3. Compute a hash value for the key and store the value with the key.
4. Determine the sub-map to use by hash-value MOD number-of-maps (see the sketch after this list).
5. Use 963 sub-maps in total to minimize locking contention.
6. Randomness kicks in, allowing many threads to run concurrently.
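A minimal sketch of steps 1 and 3-6, assuming C++17, std::unordered_map sub-maps, and hypothetical names (NUM_MAPS, maps, locks, add_key). Note that the actual solutions store the precomputed hash with the key, whereas this sketch lets the sub-map rehash it:

    #include <array>
    #include <cstddef>
    #include <functional>
    #include <mutex>
    #include <string>
    #include <string_view>
    #include <unordered_map>

    constexpr std::size_t NUM_MAPS = 963;  // many sub-maps minimize lock contention

    // One mutex per sub-map. Globals here only to keep the sketch short.
    std::array<std::unordered_map<std::string, long long>, NUM_MAPS> maps;
    std::array<std::mutex, NUM_MAPS> locks;

    // Each worker thread calls this for every key parsed from its chunk.
    void add_key(std::string_view key, long long count) {
        // Step 3: compute a hash value for the key.
        const std::size_t h = std::hash<std::string_view>{}(key);

        // Step 4: pick the sub-map by hash-value MOD number-of-maps. The hash
        // spreads keys randomly (step 6), so threads rarely contend for the
        // same lock and many threads can insert concurrently.
        const std::size_t idx = h % NUM_MAPS;

        std::lock_guard<std::mutex> guard(locks[idx]);
        maps[idx][std::string(key)] += count;
    }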
Thank you, Gregory Popovitch. He identified the last off-by-one error in my C++ chunking logic and shared a couple of suggestions; see issue 198. Thank you, eyepopslikeamosquito, for introducing me to C++. Thank you, anonymous monk, who mentioned the word parallel. So we tried running parallel in C++, and eventually chunking too. :)
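For the curious, here is a rough sketch of what single-file chunking involves, with a hypothetical make_chunks helper (not the actual code from the repository): each thread gets a byte range whose end is advanced past the next newline, so no line straddles two chunks. The boundary adjustments are exactly where off-by-one errors like to hide.

    #include <algorithm>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Byte range [beg, end) of one chunk; one chunk per thread.
    struct Chunk { std::size_t beg, end; };

    std::vector<Chunk> make_chunks(std::FILE* fh, std::size_t file_size,
                                   std::size_t nthreads) {
        std::vector<Chunk> chunks;
        std::size_t beg = 0;
        for (std::size_t i = 0; i < nthreads; ++i) {
            std::size_t end = (i + 1 == nthreads)
                ? file_size
                : std::max(beg, file_size * (i + 1) / nthreads);
            if (end < file_size) {
                std::fseek(fh, static_cast<long>(end), SEEK_SET);
                int c;
                while ((c = std::fgetc(fh)) != EOF) {
                    ++end;                 // consume one byte
                    if (c == '\n') break;  // boundary is one past the newline
                }
            }
            chunks.push_back({beg, end});
            beg = end;
        }
        return chunks;
    }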