The purpose of the JobQueue is that we want to preserve the order the commands are issued in ... This allows us to preserve the order because: if we dequeue/enqueue user2's, and one of user1's finishes and updates our database, then when we dequeue user3's command, it will see that M1 has an available resource, and run that job. That is what we want to avoid. Using it as a proper queue would not preserve the order in that fringe case without some more tinkering.
Hm. But, queues DO preserve order. That's kind of their raison d'être. (And they also 'close up the gaps' automatically!)
The problem -- I would suggest -- is that you are storing multiple requests as single items.
If instead (in your scenario above) you queued two items for each of your 3 users, you could then process that queue, queue-wise: re-queuing anything that isn't yet ready, and discarding (moving elsewhere) anything that is complete. The queue will take care of keeping things in their right order and ensuring that the 'holes' get closed up, all without you having to mess around with indices trying to remember what's been removed and what hasn't.
Just a thought.
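To make that concrete, here is a minimal, single-threaded sketch of the per-item re-queue pattern, using a plain array as the queue. The item layout (user => command), the %available test, and the pass limit are all placeholders I've invented for illustration; they are not from your code.

```perl
#!/usr/bin/perl
# Sketch: queue one item per command (6 items for 3 users x 2 commands),
# then process queue-wise, re-queuing anything not yet runnable.
use strict;
use warnings;

my @queue = map { { user => $_->[0], cmd => $_->[1] } }
    ( [ 'user1', 'c1' ], [ 'user1', 'c2' ],
      [ 'user2', 'c1' ], [ 'user2', 'c2' ],
      [ 'user3', 'c1' ], [ 'user3', 'c2' ] );

# Pretend only user1's machine currently has a free resource.
my %available = ( user1 => 1 );

my @done;
my $passes = 0;
while( @queue and $passes < 10 ) {
    my $item = shift @queue;            # dequeue the head item
    if( $available{ $item->{user} } ) {
        push @done, $item;              # runnable: dispatch it
    }
    else {
        push @queue, $item;             # not ready: re-queue at the tail
    }
    ++$passes;
}

printf "ran %d of %d items; %d still waiting\n",
    scalar( @done ), 6, scalar( @queue );
```

The same loop body drops straight into a Thread::Queue consumer (dequeue/enqueue in place of shift/push); the point is only that each queued item is a single command, so the queue itself maintains the ordering and closes the gaps.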
That said; I'm still not clear on why you need the shared %nodes hash, when the results from the jobs are returned to the main thread via the $Qresults?
In reply to Re^7: Thread terminating abnormally COND_SIGNAL(6)
by BrowserUk
in thread Thread terminating abnormally COND_SIGNAL(6)
by rmahin