vitoco has asked for the wisdom of the Perl Monks concerning the following question:

I want to resume a truncated download of a BIG file from a remote host.

I'm currently using WWW::Mechanize:

$mech->get($url, ':content_file' => "$file.tmp");

I couldn't find any standard way to do this, other than adding the Range header to the HTTP::Request object myself and then concatenating the resulting sequence of temporary files.

Did I miss something? A parameter? An LWP::UserAgent method?

Thanks!!!
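
For reference, a minimal sketch of that Range-header approach, assuming the server supports byte ranges and that Mechanize passes extra name/value pairs through to LWP::UserAgent as request headers (its get() is documented as a well-behaved overload of LWP::UserAgent's). The URL and file names are placeholders:

use strict;
use warnings;
use WWW::Mechanize;

my $url  = 'http://example.com/big.bin';    # hypothetical URL
my $file = 'big.bin';                       # hypothetical file name

my $mech   = WWW::Mechanize->new;
my $offset = -s "$file.tmp" || 0;           # bytes already on disk, 0 if none

open my $fh, '>>', "$file.tmp" or die "open $file.tmp: $!";
binmode $fh;

# ':content_cb' streams the body to the callback instead of
# keeping it in memory, so the partial file grows in place.
my $res = $mech->get(
    $url,
    'Range'       => "bytes=$offset-",
    ':content_cb' => sub {
        my ( $data, $response ) = @_;
        # Append only if the server honored the Range request
        # (206 Partial Content); a plain 200 would resend the
        # whole file and corrupt the partial copy.
        die "server ignored the Range header\n"
            unless $response->code == 206;
        print {$fh} $data;
    },
);
close $fh or die "close: $!";

die 'resume failed: ' . $res->status_line unless $res->code == 206;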

Replies are listed 'Best First'.
Re: Resume downloads
by Anonymous Monk on Aug 10, 2009 at 21:30 UTC

      AFAIK, that would download the whole file again if the modification time of the file on the server is newer than the one I've previously downloaded.

      What I have is about 40MB of a 45MB binary file from an interrupted connection, and I want to resume in a new connection, skipping the bytes I already have...

      I'm just looking for a standard way to do this, rather than writing code from scratch using lower-level methods.

        Oh, in that case there is no standard way
Re: Resume downloads
by mulander (Monk) on Aug 13, 2009 at 11:03 UTC
    Found and extracted from the Google cache:
    • Find out how much has already been downloaded
    • Create an HTTP Request (e.g. HTTP::Request from the LWP Bundle)
    • Set an additional header for the offset (the Range header, as specified in RFC 2616; the server answers with a matching Content-Range). HTTP::Headers from the LWP Bundle can do that.
    • Send the request and append the answer to the file (a sketch of these steps follows below).
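
    A minimal sketch of those four steps, assuming the server supports byte ranges; the URL and file name are placeholders:

        use strict;
        use warnings;
        use LWP::UserAgent;
        use HTTP::Request;

        my $url  = 'http://example.com/big.bin';    # hypothetical URL
        my $file = 'big.bin.tmp';                   # hypothetical temp file

        # Step 1: find out how much has already been downloaded.
        my $offset = -s $file || 0;

        # Steps 2 and 3: build the request and set the Range header
        # (RFC 2616, section 14.35) to start where the partial file ends.
        my $req = HTTP::Request->new( GET => $url );
        $req->header( Range => "bytes=$offset-" );

        # Step 4: send the request and append the answer to the file.
        open my $fh, '>>', $file or die "open $file: $!";
        binmode $fh;

        my $ua  = LWP::UserAgent->new;
        my $res = $ua->request(
            $req,
            sub {
                my ( $data, $response ) = @_;
                # 206 Partial Content means the Range was honored; a 200
                # would be the whole file again, so refuse to append.
                die "server ignored the Range header\n"
                    unless $response->code == 206;
                print {$fh} $data;
            },
        );
        close $fh or die "close: $!";

        print $res->status_line, "\n";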
Re: Resume downloads
by Anonymous Monk on Dec 01, 2011 at 08:32 UTC