I have to write a daemon that will fetch URLs from different servers (each document will be no more than 100 KB) in parallel, with at least 20 requests running concurrently (the more, the better).
I have to choose which modules to use. I've heard that LWP is slow and too CPU-intensive for crawlers (at least that is what the WWW::Curl::Multi documentation says when recommending itself for crawlers), while WWW::Curl::Multi itself is broken (I've reported the bugs on RT).
What options do I have besides LWP? I'm considering using threads with WWW::Curl::Easy, running a downloader inside each thread.
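To make the idea concrete, here is a rough, untested sketch of that threads + WWW::Curl::Easy approach: one curl handle per worker thread, URLs taken from the command line, and a Thread::Queue to hand out work. The worker count and timeout are placeholder numbers I'd still have to tune.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use threads;
    use Thread::Queue;
    use WWW::Curl::Easy;

    # Placeholder values -- would need tuning on the VPS.
    my $WORKERS = 20;
    my $TIMEOUT = 30;

    my $queue = Thread::Queue->new;
    $queue->enqueue(@ARGV);                 # URLs to fetch
    $queue->enqueue((undef) x $WORKERS);    # one "stop" marker per worker

    sub worker {
        # One libcurl handle per thread; handles are not shared across threads.
        while (defined(my $url = $queue->dequeue)) {
            my $curl = WWW::Curl::Easy->new;
            my $body = '';
            open my $fh, '>', \$body or die "open: $!";

            $curl->setopt(CURLOPT_URL,            $url);
            $curl->setopt(CURLOPT_WRITEDATA,      $fh);
            $curl->setopt(CURLOPT_FOLLOWLOCATION, 1);
            $curl->setopt(CURLOPT_TIMEOUT,        $TIMEOUT);

            my $rc = $curl->perform;
            if ($rc == 0) {
                my $code = $curl->getinfo(CURLINFO_HTTP_CODE);
                printf "%s -> HTTP %d, %d bytes\n", $url, $code, length $body;
            }
            else {
                warn "$url failed: " . $curl->strerror($rc) . " ($rc)\n";
            }
            close $fh;
        }
    }

    my @threads = map { threads->create(\&worker) } 1 .. $WORKERS;
    $_->join for @threads;

Is something along these lines reasonable, or is there a better-suited module for this kind of workload?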
This has to run on Linux, ideally on a Virtual Private Server (the hoster permits running spiders there), so please don't answer "use LWP and buy a server with an 8-core Intel CPU".
Thanks in advance for your answers!