You could try using HEADs first and comparing the metadata provided by the headers: size, expiry date, etc. Perhaps coalesce all the non-variable headers into a single string and then MD5-hash it.
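As a rough sketch of that idea (assuming LWP::UserAgent and Digest::MD5; the particular headers chosen and the example URL are only illustrative, not a recommendation):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::UserAgent;
    use Digest::MD5 qw(md5_hex);

    my $ua = LWP::UserAgent->new( timeout => 10 );

    # Fingerprint a URL from its "non-variable" response headers.
    sub head_fingerprint {
        my ($url) = @_;
        my $res = $ua->head($url);
        return unless $res->is_success;

        # Coalesce the headers that are unlikely to change between
        # fetches of identical content; Date, Expires, etc. are omitted.
        my $meta = join '|',
            map { $res->header($_) // '' }
            qw(Content-Length Content-Type Last-Modified ETag);

        return md5_hex($meta);
    }

    # Only GET the page if its fingerprint differs from a stored one.
    my %seen;    # previously stored fingerprints, keyed by URL
    my $url = 'http://example.com/page.html';
    my $fp  = head_fingerprint($url);
    if ( defined $fp && ( $seen{$url} // '' ) ne $fp ) {
        my $res = $ua->get($url);
        print $res->decoded_content if $res->is_success;
        $seen{$url} = $fp;
    }

Note, though, that this only saves you the GET when the fingerprint matches something you have already seen; for everything else you pay for both requests, which is the point made next.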
But on most servers it costs almost the same, in terms of resources, to respond to a HEAD request as it does to a GET -- indeed, often the only difference is that the body of the response is discarded after its size has been measured. And for the documents you wish to keep, you then have to do the GET anyway, so the net cost is greater.
That's why WGET does it the way it does. Other than for large binary downloads -- images, video, music, etc. -- the net cost of doing GETs alone is lower than that of a mix of HEADs and GETs.
As you cannot measure the size of the content, or checksum it, until you have downloaded it, there is little better you can do unless you are entirely comfortable that rejecting links on the basis of their URLs is a viable option for you.
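If URL-based rejection is acceptable, it can be as simple as a list of patterns checked before each fetch. A minimal sketch, assuming the extensions you want to skip are known up front (the list here is only an example):

    use strict;
    use warnings;

    # Skip obviously large binary content by extension before fetching.
    my @skip_patterns = (
        qr/\.(?:iso|zip|tar\.gz|mp3|mp4|avi|jpe?g|png|gif)$/i,
    );

    sub want_url {
        my ($url) = @_;
        for my $re (@skip_patterns) {
            return 0 if $url =~ $re;
        }
        return 1;
    }

    print want_url('http://example.com/index.html') ? "fetch\n" : "skip\n";
    print want_url('http://example.com/movie.mp4')  ? "fetch\n" : "skip\n";

Of course this tells you nothing about content that hides behind query strings or extensionless URLs, which is exactly why it only works if you are comfortable with that trade-off.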