in reply to Eliminating "duplicate" domains from a hash/array

Use one of the DNS modules to look up the A records. This will give you IP addresses which you can check against each other. Alas, this is far from foolproof: load-sharing www servers via DNS will give you different IPs each time you look up, and a single server can serve many sites, all of which will have the same IP (Apache name-based virtual hosts). One step better is HEAD information, but that is still not foolproof. Ultimately, only a full content comparison proves whether the content is the same or not.

Chris
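
For example, a minimal sketch of that A-record comparison, assuming the Net::DNS module (the example hostnames are only illustrative):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Net::DNS;

    # Resolve each host to its set of A records and key on the sorted IP
    # list -- hosts sharing the same addresses are *probably* duplicates,
    # subject to the round-robin and virtual-host caveats above.
    my $res = Net::DNS::Resolver->new;

    sub a_records {
        my ($host) = @_;
        my $packet = $res->search($host, 'A') or return;
        return sort map { $_->address } grep { $_->type eq 'A' } $packet->answer;
    }

    my %by_ips;
    for my $host (qw(freshmeat.net www.freshmeat.net)) {    # example hosts
        my @ips = a_records($host) or next;
        push @{ $by_ips{ join ',', @ips } }, $host;
    }

    # Hosts grouped under the same key resolved to identical address sets.
    for my $key (keys %by_ips) {
        print "$key => @{ $by_ips{$key} }\n";
    }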

Re: Eliminating "duplicate" domains from a hash/array
by hacker (Priest) on Mar 31, 2003 at 12:02 UTC
    The problem with full content comparison, and something I'm trying to avoid, is that for sites like freshmeat.net, the main page is 142,178 bytes long (as of a few minutes ago; it changes frequently). Having to fetch that same content potentially multiple times, then compare, would be extremely slow on slower client connections, especially if I'm going to discard it anyway as a duplicate.

    Also, if my spider takes an hour to fetch a series of links from a site, the first link might be freshmeat.net and the last link in the fetch (an hour later) www.freshmeat.net, which by then has a few extra items added to the top of the page (as they always do). The content will be different, but there is no need to fetch it again, since the first link I fetched during this session already gave me "mostly" current content.

    I realize HEAD information is also not the best approach, because:

    • Not all servers support HEAD
    • Every time you HEAD a site, you'll get a different Client-Date, which will change your comparison
    • Multiple servers can serve the same content (as with google.com, which currently shows up with two hosts, while something like crawler1.googlebot.com reports 30 separate IP addresses).
    • HEAD incurs a "double hit" on the site: if the content turns out to be valid, I'd like to avoid doing a HEAD and then a GET against the same site, or for each link found (see the header-fingerprint sketch after this list).
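
    If HEAD is used despite those problems, one rough sketch (assuming LWP::UserAgent; the URLs and the choice of headers are only illustrative) is to fingerprint only the headers that describe the content, ignoring Date and the Client-* headers that LWP adds locally:

        use strict;
        use warnings;
        use LWP::UserAgent;

        my $ua = LWP::UserAgent->new( timeout => 30 );

        # Build a fingerprint from headers describing the content itself,
        # skipping Date and the locally-added Client-* headers.
        sub head_fingerprint {
            my ($url) = @_;
            my $resp = $ua->head($url);
            return unless $resp->is_success;    # some servers refuse HEAD
            return join '|',
                map { $_ . '=' . ($resp->header($_) || '') }
                qw(Content-Length Content-Type Last-Modified ETag);
        }

        # Identical fingerprints suggest -- but do not prove -- that two
        # hosts serve the same page.
        my $fp1 = head_fingerprint('http://freshmeat.net/');
        my $fp2 = head_fingerprint('http://www.freshmeat.net/');
        print "probably the same content\n"
            if defined $fp1 and defined $fp2 and $fp1 eq $fp2;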

    It's definitely a tricky subject, but I'm sure there are some ways to avoid doing this.

    One approach I thought of while I was sleeping last night was to maintain a small Berkeley DBM (or flat file) of hosts and the potential "duplicate" URIs they are known to come from, keep it current on the client side, and check it each time I start the spider up to crawl new content.
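
    A minimal sketch of that Berkeley DBM idea, assuming the DB_File module (the filename and the host-to-canonical-host layout are just one possible arrangement):

        use strict;
        use warnings;
        use DB_File;
        use Fcntl qw(O_CREAT O_RDWR);

        # Persistent map of host => canonical host it duplicates.
        tie my %dup_of, 'DB_File', 'known_dups.db', O_CREAT | O_RDWR, 0644, $DB_HASH
            or die "Cannot tie known_dups.db: $!";

        # At spider start-up, collapse each candidate host onto its
        # canonical form before queueing it.
        sub canonical_host {
            my ($host) = @_;
            return exists $dup_of{$host} ? $dup_of{$host} : $host;
        }

        # When two hosts are later found to serve the same content, record
        # the duplicate so future runs skip it immediately.
        sub record_duplicate {
            my ($dup, $canonical) = @_;
            $dup_of{$dup} = $canonical;
        }

        record_duplicate('www.freshmeat.net', 'freshmeat.net');    # example
        print canonical_host('www.freshmeat.net'), "\n";           # freshmeat.net

        untie %dup_of;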