in reply to ithreads weren't the way.. still searching
...ithreads functionality doesn't even come close to what it would take for this to work.
Pardon me, but poppycock!
From the scant information supplied, you want to fetch a sequence of pages concurrently and then reassemble them in their original order.
Off the top of my head, I'd do something like this: a pool of fetcher threads and two queues:
One to supply the "$seq_no:$url" work items to the threads;
One to return each fetched page as "$seq_no:$contents" to the main thread.
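A minimal sketch of that pool-plus-two-queues shape, using threads and Thread::Queue. The fetch() here is a stand-in (a real program would use something like LWP::Simple's get); the URLs and pool size are made up for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use Thread::Queue;

# Stand-in fetcher; swap in a real HTTP GET (e.g. LWP::Simple::get).
sub fetch { my $url = shift; return "contents-of-$url" }

my $work    = Thread::Queue->new;   # supplies "$seq_no:$url" to the threads
my $results = Thread::Queue->new;   # returns "$seq_no:$contents" to main

my $N = 4;                          # pool size: an arbitrary choice
my @pool = map {
    threads->create( sub {
        while ( defined( my $job = $work->dequeue ) ) {
            my ( $seq, $url ) = split /:/, $job, 2;
            $results->enqueue( "$seq:" . fetch( $url ) );
        }
    } );
} 1 .. $N;

my @urls = map "page$_", 0 .. 9;    # hypothetical page list
$work->enqueue( "$_:$urls[$_]" ) for 0 .. $#urls;
$work->enqueue( (undef) x $N );     # one terminator per worker
$_->join for @pool;

# All results are now queued, in whatever order the workers finished.
print $results->pending, " pages fetched\n";
```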
There is plenty of scope in there for overlapping the appending and processing with the fetching. The main thread can dequeue the returns, process those that come out in the right order and store out-of-sequence returns in a hash for easy lookup. Each time it completes processing one set of content, it looks first in the hash to see if the next in sequence is available. If not, it goes back to dequeuing until it gets it.
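The buffer-and-drain logic described above can be shown without any threads at all. This toy loop consumes out-of-order "$seq_no:$contents" returns (the data here is invented), parks early arrivals in a hash, and emits everything in sequence:

```perl
use strict;
use warnings;

# Simulated out-of-order "$seq_no:$contents" returns (hypothetical data).
my @returns = ( "2:C", "0:A", "3:D", "1:B" );

my %buffer;        # out-of-sequence returns, keyed by sequence number
my $next = 0;      # next sequence number due for processing
my @assembled;

for my $ret ( @returns ) {
    my ( $seq, $contents ) = split /:/, $ret, 2;
    $buffer{$seq} = $contents;
    # Drain the hash for as long as the next-in-sequence is available.
    while ( exists $buffer{$next} ) {
        push @assembled, delete $buffer{$next};
        $next++;
    }
}

print join( '', @assembled ), "\n";   # ABCD
```

In the real program the for-loop body would sit behind a blocking $results->dequeue instead of iterating a fixed array.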
With a little more ingenuity, the main thread could start another thread to do the processing, one that waits on a third queue. The main thread then dequeues and either re-queues to the processing thread or buffers in a hash.
The processing thread then performs the final disposal of the processed accumulated data, whilst the main thread blocks waiting for it to finish.
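A sketch of that three-queue variant, again with invented data standing in for the fetchers' returns. The main thread reorders via the hash and forwards in sequence; the processing thread accumulates until it sees the terminator, and the main thread blocks on join:

```perl
use strict;
use warnings;
use threads;
use Thread::Queue;

my $results = Thread::Queue->new;   # "$seq_no:$contents" from the fetchers
my $proc_q  = Thread::Queue->new;   # in-order contents for the processor

# Processing thread: accumulate until terminated; the return value
# stands in for the "final disposal" of the processed data.
my $processor = threads->create( sub {
    my $acc = '';
    while ( defined( my $c = $proc_q->dequeue ) ) { $acc .= $c }
    return $acc;
} );

# Simulated out-of-order returns (hypothetical data).
$results->enqueue( $_ ) for "1:B", "0:A", "2:C";
$results->enqueue( undef );

my %buffer;
my $next = 0;
while ( defined( my $ret = $results->dequeue ) ) {
    my ( $seq, $contents ) = split /:/, $ret, 2;
    $buffer{$seq} = $contents;
    while ( exists $buffer{$next} ) {
        $proc_q->enqueue( delete $buffer{$next} );
        $next++;
    }
}
$proc_q->enqueue( undef );

print $processor->join, "\n";       # main thread blocks until it finishes
```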
It's actually a very good use of threads and very straightforward to code.
Re^2: ithreads weren't the way.. still searching
by hlen (Beadle) on Oct 01, 2004 at 05:16 UTC
Re^2: ithreads weren't the way.. still searching
by meredith (Friar) on Oct 01, 2004 at 04:25 UTC