(OP here)

salva, yes that was my thinking, too. With the advice here and quite a few optimisations it looks as if I can push the fetch rate up further. More tweaking needed, I think. That exit advice is new to me. How would I use it in the context of Parallel::ForkManager?
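
My best guess at how it would fit (assuming the advice means POSIX::_exit, and with fetch_and_store() below standing in for the real worker code) is something like this -- corrections welcome:

    use strict;
    use warnings;
    use POSIX ();
    use Parallel::ForkManager;

    sub fetch_and_store {          # placeholder for the real fetch logic
        my ($url) = @_;
        # ... fetch the URL, write the result to disk ...
    }

    my @urls = @ARGV;              # or however the job batch is loaded
    my $pm   = Parallel::ForkManager->new(20);   # 20 concurrent children

    URL: for my $url (@urls) {
        $pm->start and next URL;   # parent gets the child PID; child gets 0

        fetch_and_store($url);

        # Normally the child would call $pm->finish, which exits via exit().
        # If the advice is to use POSIX::_exit instead, it skips END blocks
        # and destructors inherited from the parent; the parent's waitpid
        # still reaps the child either way, though I'd want to confirm it
        # doesn't upset ForkManager's run_on_finish bookkeeping.
        POSIX::_exit(0);
    }
    $pm->wait_all_children;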

matija, good point. I'll eventually use ReiserFS, which has superb support for large numbers of files, but I should probably use your approach now. I agree that it would probably give better performance.
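
In case it helps anyone else, here's roughly what I have in mind -- assuming your approach is the usual trick of bucketing files into hashed subdirectories (the two-level split and the spool/ base directory are just my guesses at sensible choices):

    use strict;
    use warnings;
    use Digest::MD5 qw(md5_hex);
    use File::Path  qw(mkpath);

    # Map a URL to a two-level directory derived from its MD5, so files
    # spread evenly across directories instead of piling up in one.
    sub path_for_url {
        my ($url)   = @_;
        my $digest  = md5_hex($url);
        my ($a, $b) = (substr($digest, 0, 2), substr($digest, 2, 2));
        my $dir     = "spool/$a/$b";
        mkpath($dir) unless -d $dir;
        return "$dir/$digest";
    }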

Regarding HTTP::GHTTP and HTTP::MHTTP: MHTTP doesn't respect the Host header and thus can't handle virtual hosts. GHTTP is indeed nice, but it supports neither HTTPS nor the features of LWP. (My main attraction to HTTP::Lite was that it was pure Perl and easy enough to hack to get the remote IP address; now that I can get that from LWP, Lite is less useful.) It looks as if LWP supports using GHTTP internally, though, which sounds like a win-win. :-) I'll have to run some benchmarks on this...
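
For anyone wanting to try the same thing, this is the sort of thing I'll be benchmarking -- assuming LWP::Protocol::GHTTP is installed alongside HTTP::GHTTP; whether that backend sets the Client-Peer header the way the stock http backend does is one of the things I need to check:

    use strict;
    use warnings;
    use LWP::UserAgent;
    use LWP::Protocol ();
    use LWP::Protocol::GHTTP ();

    # Ask LWP to drive plain http:// requests through HTTP::GHTTP.
    LWP::Protocol::implementor('http', 'LWP::Protocol::GHTTP');

    my $ua  = LWP::UserAgent->new(timeout => 30, agent => 'my-crawler/0.1');
    my $res = $ua->get('http://www.example.org/');

    # With the stock LWP::Protocol::http backend, the remote IP shows up
    # in the Client-Peer response header.
    my $peer = $res->header('Client-Peer') || 'unknown peer';
    print $res->code, " from $peer\n";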

merlyn, I'm afraid I do have to hit this number of external URLs. :-) It's for a research project that does have many merits. (I don't agree that we don't need a better search engine, but I guess that's academic.) I'm going some way towards supporting the Robots Exclusion Protocol: I pre-process the list of URLs to identify the few hosts that will be hit more than a couple of times, then fetch their robots.txt. If they forbid crawling, I nix them from the input. By working with batches indexed by the hash of the URL I severely reduce the risk of hitting any server too hard: a host would have to have more than a trivial number of URLs in the index whose hashes share at least their first two characters (even more once I implement matija's suggestion). I just wrote a script to double-check this, and only two hosts have multiple URLs in the same job bin: one has 2, the other 3. I appreciate your concern -- I run large sites myself, and am perfectly aware of the damage a runaway spider can cause. ;-)
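
For the curious, the double-check was nothing more sophisticated than this (MD5 here stands in for whichever hash actually keys the job bins; it reads one URL per line and reports any host that appears more than once in a bin):

    use strict;
    use warnings;
    use Digest::MD5 qw(md5_hex);
    use URI;

    my %count;                      # bin -> host -> number of URLs
    while (my $url = <>) {
        chomp $url;
        my $bin  = substr(md5_hex($url), 0, 2);
        my $host = eval { URI->new($url)->host } or next;
        $count{$bin}{$host}++;
    }

    for my $bin (sort keys %count) {
        for my $host (sort keys %{ $count{$bin} }) {
            my $n = $count{$bin}{$host};
            print "$bin $host $n\n" if $n > 1;
        }
    }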


In reply to Re: Advice on Efficient Large-scale Web Crawling by Anonymous Monk
in thread Advice on Efficient Large-scale Web Crawling by Anonymous Monk
