The problem with full-content comparison, and something I'm trying to avoid, is that for a site like freshmeat.net, the main page is 142,178 bytes long (as of a few minutes ago; it changes frequently). Having to fetch that same content, potentially multiple times, and then compare it would be extremely slow on slower client connections, especially if I'm going to discard it anyway as a duplicate.
Also, if my spider takes an hour to fetch a series of links from a site, the first link might be freshmeat.net and the last link (fetched an hour later) might be www.freshmeat.net, which by then has a few extra items added to the top of the page (as it always does). The content will be different, but there's no need to fetch it again, since the first fetch of the session already gave me "mostly" current content.
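To make that "mostly current is good enough" idea concrete, here's a minimal sketch of a per-session freshness check. Everything in it is an assumption for illustration: the one-hour window, the %last_fetched cache, and the canonical() helper that collapses freshmeat.net and www.freshmeat.net into one key.

```perl
use strict;
use warnings;
use LWP::UserAgent;
use URI;

my $ua = LWP::UserAgent->new( timeout => 30 );

# Assumed per-session cache: canonical URI => epoch time of last successful GET.
my %last_fetched;
my $FRESH_FOR = 3600;    # treat anything fetched within the hour as "current enough"

# Hypothetical canonicalizer: lowercases the host and drops a leading "www."
# so freshmeat.net and www.freshmeat.net map to the same cache key.
sub canonical {
    my $uri  = URI->new(shift)->canonical;
    my $host = $uri->host;
    $host =~ s/^www\.//i;
    $uri->host( lc $host );
    return $uri->as_string;
}

sub fetch_if_stale {
    my ($url) = @_;
    my $key = canonical($url);

    # Skip the GET entirely if we already pulled this page during the session.
    if ( exists $last_fetched{$key}
        && time() - $last_fetched{$key} < $FRESH_FOR )
    {
        return undef;
    }

    my $res = $ua->get($url);
    $last_fetched{$key} = time() if $res->is_success;
    return $res;
}
```

The point isn't the exact window, it's that within one crawl session the spider never pays for the 142 KB twice just because a few items were added to the top of the page.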
I realize HEAD information is also not the best approach, because:
- Not all servers support HEAD
- Every time you HEAD a site, you'll get a different Client-Date, which will change your comparison (a fingerprint that skips volatile headers is sketched after this list)
- Multiple servers can serve the same content (a la google.com, which currently shows up as two hosts, while something like crawler1.googlebot.com reports 30 separate IP addresses).
- HEAD incurs a "double hit" on the site; if the content is valid, I'd like to avoid doing a HEAD followed by a GET on the same site, or for each link found.
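To illustrate the Client-Date problem, here's a rough sketch that compares HEAD responses only by headers that should stay stable between requests. Which headers count as "stable enough" (Content-Length, Last-Modified, ETag, Content-Type) is my assumption, the head_fingerprint() helper is hypothetical, and this still pays the extra HEAD hit described above.

```perl
use strict;
use warnings;
use LWP::UserAgent;
use Digest::MD5 qw(md5_hex);

my $ua = LWP::UserAgent->new( timeout => 30 );

# Headers assumed to be stable between requests; Client-Date (added by LWP)
# and Date are deliberately left out because they change on every request.
my @stable = qw(Content-Length Last-Modified ETag Content-Type);

# Returns an MD5 fingerprint of the stable headers from a HEAD request,
# or undef if the server refuses HEAD (they aren't required to support it).
sub head_fingerprint {
    my ($url) = @_;
    my $res = $ua->head($url);
    return undef unless $res->is_success;
    my $joined = join "\n",
        map { lc($_) . ':' . ( $res->header($_) // '' ) } @stable;
    return md5_hex($joined);
}

# Two URLs with the same fingerprint are *probably* duplicates,
# but this is a heuristic, not a guarantee.
my $a = head_fingerprint('http://freshmeat.net/');
my $b = head_fingerprint('http://www.freshmeat.net/');
print "likely duplicates\n" if defined $a && defined $b && $a eq $b;
```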
It's definitely a tricky subject, but I'm sure there are some ways to avoid doing this.
One idea I had while sleeping last night was to maintain a small Berkeley dbm (or flat file) of hosts and the potential "duplicate" URIs they are known to come from, keep it current on the client side, and check it each time I start the spider up to crawl new content.
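Here's a rough sketch of what that might look like with DB_File. The file name (duplicate_hosts.db), the "canonical host => space-separated alias list" layout, and the remember_duplicate()/is_known_duplicate() helpers are all just assumptions to show the shape of it, not a finished design.

```perl
use strict;
use warnings;
use DB_File;
use Fcntl qw(O_CREAT O_RDWR);

# Tie a small on-disk Berkeley DB mapping a canonical host to a
# space-separated list of URIs known to serve duplicate content.
my %dup_hosts;
tie %dup_hosts, 'DB_File', 'duplicate_hosts.db', O_CREAT | O_RDWR, 0644, $DB_HASH
    or die "Cannot open duplicate_hosts.db: $!";

# Record that $alias serves the same content as $canonical_host.
sub remember_duplicate {
    my ( $canonical_host, $alias ) = @_;
    my %known = map { $_ => 1 } split ' ', ( $dup_hosts{$canonical_host} // '' );
    $known{$alias} = 1;
    $dup_hosts{$canonical_host} = join ' ', sort keys %known;
}

# At spider start-up, check whether a host is already recorded as a
# duplicate of something crawled before.
sub is_known_duplicate {
    my ($host) = @_;
    for my $canonical ( keys %dup_hosts ) {
        my %known = map { $_ => 1 } split ' ', $dup_hosts{$canonical};
        return $canonical if $known{$host};
    }
    return undef;
}

remember_duplicate( 'freshmeat.net', 'www.freshmeat.net' );
print "www.freshmeat.net -> ",
    is_known_duplicate('www.freshmeat.net') // 'not known', "\n";

untie %dup_hosts;
```

Since the file persists between runs, the spider can skip known-duplicate hosts before it fetches anything at all, which avoids both the full-content comparison and the HEAD/GET double hit.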