in reply to Re: Multithread Web Crawler
in thread Multithread Web Crawler
A web crawling application is not going to see much benefit from the lightweight nature of threads, since crawling is by its nature fairly heavyweight work.
If you decide that threads don't really hold an advantage for your application, you can save yourself a great deal of work by forking off processes instead.
As pointed out in a recent node, Parallel::ForkManager might be of use to you. The module description includes:
"This module is intended for use in operations that can be done in parallel where the number of processes to be forked off should be limited. Typical use is a downloader which will be retrieving hundreds/thousands of files."

Sounds right up your tree? Or is that down your tree? (I never did work out where the roots for a red-black tree would go.)
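For what it's worth, a minimal sketch of how that might look for your crawler is below. I'm assuming LWP::Simple for the actual fetching, a placeholder @urls list, and a cap of 5 concurrent children; adjust to taste.

use strict;
use warnings;
use Parallel::ForkManager;
use LWP::Simple qw(getstore);

# Placeholder URL list -- substitute your crawler's real queue here.
my @urls = (
    'http://www.example.com/a.html',
    'http://www.example.com/b.html',
);

# Never run more than 5 download processes at once.
my $pm = Parallel::ForkManager->new(5);

URL: for my $url (@urls) {
    # In the parent, start() returns the child's PID (true), so the
    # parent skips to the next URL; the child gets 0 and falls through.
    $pm->start and next URL;

    # Child process: fetch the page to a local file, then exit.
    (my $file = $url) =~ s{.*/}{};
    $file = 'index.html' unless length $file;
    getstore($url, $file);

    $pm->finish;
}

# Parent blocks here until all children have exited.
$pm->wait_all_children;

All the fork/reap bookkeeping is hidden behind start() and finish(), which is where most of the saved work comes in.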