Well, the kind of model that I have thought of is that when the application starts, based on the number of children I can fork (the app starter defines it based on the server load), I fork that many children.
I have to concur that there is no benefit to mixing forks and threads in the way you have. If you want 9 threads to run, start 9 threads in a single process rather than forking 3 times and starting 3 in each.
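To make that concrete, here is a minimal sketch of starting all 9 threads in one process (the `task` sub and its body are my invention, standing in for your real per-thread work):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use threads;

## A stand-in for the real per-thread work; name and body are invented.
sub task {
    my $id = shift;
    return "thread $id finished";
}

## All 9 threads live in this single process; no fork() required.
my @threads = map { threads->create( \&task, $_ ) } 1 .. 9;
print $_->join(), "\n" for @threads;
```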
I've spent a good while going over the code you've posted, and reading your descriptions of the application, but I still can't make sense of what you are actually trying to do. You've described how you think you should do something, but no real detail of either what you are doing, or why you think you should do it this way.
For example: You start your query threads with a subsection of the work items. Then your architecture calls for those threads to process one item, then signal the main thread and suspend, whilst the main thread starts another thread to further process the results obtained. And, presumably, once that started thread finishes the further processing, it signals the main thread and dies, and the main thread signals the suspended thread to move on to the next work item.
That's way too complicated and very wasteful of resources. You are using two threads to process each work item, but only one of them can actually run at any given time. And you are going to have to start a second thread (an expensive process) to finish processing each work item, whilst the thread that started processing that work item sits around idle. Not to mention all the complexities of the signalling.
It would be far better to have the worker threads do the whole job themselves: dequeue a work item, perform the query, perform the comparison, and perform the output/cleanup, then loop for the next item.
The basic pseudo code for the main thread is:
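The pseudo-code block itself appears to have been lost from the post; here is a minimal sketch of what such a main thread could look like, using Thread::Queue. The `worker` sub is the one shown below; `$numWorkers`, `get_work_items()`, and the item list are my assumptions:

```perl
use strict;
use warnings;
use threads;
use Thread::Queue;

my $numWorkers = 4;                  ## Tune this; start with 1 until the logic is proven.
my @workItems  = get_work_items();   ## hypothetical source of your work items

my $Q = Thread::Queue->new;

## Start the pool; every worker runs the same worker() sub, pulling from the shared queue.
my @workers = map { threads->create( \&worker, $Q ) } 1 .. $numWorkers;

## Feed the queue, then one undef terminator per worker so each dequeue loop ends cleanly.
$Q->enqueue( $_ ) for @workItems;
$Q->enqueue( (undef) x $numWorkers );

$_->join for @workers;
```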
And basic pseudo-code for the worker threads is:
sub worker {
    my( $Q ) = shift;
    while( my $workItem = $Q->dequeue ) {
        ## Perform query
        ## Perform comparison
        ## Perform output/cleanup
    }
}
No signalling, no locking, no forking, no user-explicit sharing, and completely scalable. The queue manages the entire process without any further effort.
Just start with one worker thread until you are sure that the processing logic is correct. Then increase the number slowly until you see no further improvement in throughput. The processing of each item is completely linear, but multiple work items are processed concurrently. Very low complexity, no timing issues or deadlock possibilities.
The only additional complexity I foresee, reading between the lines of your various posts, is that if you are outputting your results to a single file, then you would need to employ a mutex to prevent the output from the worker threads getting interleaved. But that involves just a single shared variable and a simple lock:
## In the main thread:
my $outputMutex : shared;
...
open OUTFILE, '>', ...

## In the worker threads
...
{
    lock $outputMutex;
    print OUTFILE ...
}
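Stitched together into a runnable sketch (the file name, the item list, and the "processing" line are invented purely for illustration; autoflush is enabled so locked prints reach the file as whole lines):

```perl
use strict;
use warnings;
use threads;
use threads::shared;
use Thread::Queue;
use IO::Handle;

my $outputMutex : shared;
my $Q = Thread::Queue->new;

## Open the output before spawning, so every thread inherits the handle.
open my $outFH, '>', 'results.txt' or die $!;   ## invented filename
$outFH->autoflush( 1 );

sub worker {
    my( $Q ) = shift;
    while( my $workItem = $Q->dequeue ) {
        my $result = "processed: $workItem";    ## stand-in for query + comparison
        {
            lock $outputMutex;                  ## serialise access to the file
            print {$outFH} $result, "\n";
        }                                       ## lock released at end of block
    }
}

my @workers = map { threads->create( \&worker, $Q ) } 1 .. 3;
$Q->enqueue( $_ ) for qw( alpha beta gamma );
$Q->enqueue( (undef) x 3 );
$_->join for @workers;
close $outFH;
```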
I seriously urge you to consider what benefits you think you will get from mixing forks and threads. Actually, on the basis of the information available so far, you could probably write your application to use either; but mixing the two is completely unnecessary as far as I can tell.
Likewise, what benefit is there in suspending one thread and starting another to finish the processing of a single work item? Especially in light of the cost of starting and discarding use-once threads, and the complexities of the signalling it requires.
In reply to Re^5: Problem in Inter Process Communication
by BrowserUk
in thread Problem in Inter Process Communication
by libvenus