in reply to Recursive HTTP Downloads - without using WGET

You could try issuing HEAD requests first and comparing the metadata provided in the headers: size, expiry date, etc. Perhaps coalesce all the non-variable headers into a single string and then MD5-hash it.
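
A minimal sketch of that idea, assuming Python and the requests library; the VOLATILE_HEADERS set and head_fingerprint name are just illustrative choices, not anything wget itself does:

    import hashlib
    import requests

    # Headers that change on every response and would defeat the comparison.
    VOLATILE_HEADERS = {"date", "expires", "set-cookie", "age", "connection", "keep-alive"}

    def head_fingerprint(url):
        """Issue a HEAD request and return an MD5 digest of the stable headers."""
        resp = requests.head(url, allow_redirects=True, timeout=10)
        stable = sorted(
            f"{name.lower()}:{value}"
            for name, value in resp.headers.items()
            if name.lower() not in VOLATILE_HEADERS
        )
        return hashlib.md5("\n".join(stable).encode("utf-8")).hexdigest()

Two URLs that yield the same fingerprint are probably the same resource, but that is only as reliable as the server's header discipline.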

But, on most servers, responding to a HEAD request costs almost as much in resources as responding to a GET -- indeed, often the only difference is that the body of the response is discarded after its size has been measured. And for the pages you do want to keep, you then have to issue the GET anyway, so the net cost is greater.

That's why WGET does it the way it does. Other than for large binary downloads -- images, video, music, etc. -- the net cost of doing plain GETs is less than that of doing a mix of HEADs and GETs.

As you cannot measure the size of the content, or checksum it, until you have downloaded it, there is little better you can do unless you are entirely comfortable that rejecting links on the basis of their URLs is a viable option for you.
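
If you do end up fetching everything anyway, one workable compromise is to checksum the body after each GET and only keep or re-process pages whose digest has changed. A rough sketch under the same assumptions (Python with requests; the seen_digests cache is hypothetical):

    import hashlib
    import requests

    seen_digests = {}  # url -> MD5 of the last body we kept (hypothetical cache)

    def fetch_if_changed(url):
        """GET the resource; return its body only if it differs from the last fetch."""
        resp = requests.get(url, timeout=10)
        digest = hashlib.md5(resp.content).hexdigest()
        if seen_digests.get(url) == digest:
            return None  # unchanged since last time; skip re-processing
        seen_digests[url] = digest
        return resp.content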


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

The start of some sanity?
