in reply to Advice on Efficient Large-scale Web Crawling

I would have contacted you privately on this, but that's hard to do since you aren't logged in.

However, I question the meta-question here. Do you really have 4 million (and growing) internal URLs to hit? And if so, why don't you just hit the database behind those URLs instead? There's no need to make an HTTP request when a DBI query will give you the same information.
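
For illustration, here's a minimal DBI sketch of that idea. The DSN, credentials, table, and column names are all placeholders for whatever actually backs your URLs:

    use DBI;

    # Connect to the database that already backs those URLs.
    # (DSN, table, and column names here are made up for illustration.)
    my $dbh = DBI->connect("dbi:mysql:database=site;host=localhost",
                           "user", "password", { RaiseError => 1 });

    my $sth = $dbh->prepare("SELECT url, title, body FROM pages");
    $sth->execute;
    while (my ($url, $title, $body) = $sth->fetchrow_array) {
        # process the content directly -- no HTTP round trip needed
    }
    $sth->finish;
    $dbh->disconnect;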

But if not... if these are external URLs, do you have permission to fetch them for your purposes? I tolerate having Google and other search engines hit my site because they provide value to me and my customers. For that, I put up with the extra CPU load and bandwidth usage while those public search engines crawl my 44,000 photograph pages and 250 magazine articles. If you're setting up yet another search engine, we really don't need one. If you're using this information for your own gain, I really don't want you hitting my site. Also, if you're hitting public URLs, you should identify yourself with a distinct robot User-Agent and follow the robot exclusion rules (robots.txt).
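
A minimal sketch of a well-behaved robot using LWP::RobotUA, which fetches and honors robots.txt automatically (the agent name, contact address, and URL below are placeholders):

    use LWP::RobotUA;

    # A distinct agent name plus a contact address, with automatic
    # robots.txt handling.
    my $ua = LWP::RobotUA->new("MyCrawler/0.1", 'me@example.com');
    $ua->delay(1/60);    # minimum delay between hits to one host, in minutes

    my $response = $ua->get("http://www.example.com/");
    if ($response->is_success) {
        # ... process $response->decoded_content ...
    }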

So, to help me help you in the future, please establish some context here. Why the heck do you want to hit 4 million (and growing) URLs "efficiently"?

-- Randal L. Schwartz, Perl hacker
Be sure to read my standard disclaimer if this is a reply.