knirirr has asked for the wisdom of the Perl Monks concerning the following question:

I'm trying to use LWP::UserAgent and Crypt::SSLeay to fetch zip files over https, using ActivePerl 5.8.9 on a Windows 7 machine. Here's a section of the code:

sub Download {
    my $u = shift;    # URL (e.g. https://server.org/file.zip)
    my $p = shift;    # output file path/name

    my $ua = LWP::UserAgent->new;
    $ua->timeout(10);

    my $req = HTTP::Request->new('GET', $u);
    my $res = $ua->request($req);
    print "Requesting $u\n";

    open OUT, ">", $p or die "Can't save output: $!";
    binmode(OUT);

    if ($res->is_success) {
        print OUT $res->content;
        close OUT or die "$!";
        return 0;
    }
    else {
        print "Could not fetch $u\n";
        close OUT or die "$!";
        return 1;
    }
}

The SSL connection seems to be negotiated correctly and the code can fetch a 500-byte file, but when I try files around 200K in size, the script hangs. Does anyone have any suggestions?
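
In case it helps to see whether any data arrives at all before the hang, here is a minimal diagnostic sketch (not the original code; the URL and output filename are placeholders). It passes a content callback to request() so each chunk is written to disk as soon as it is received, and a running byte count is printed:

use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request;

my $u = 'https://server.org/file.zip';   # placeholder URL
my $p = 'file.zip';                      # placeholder output path

my $ua = LWP::UserAgent->new;
$ua->timeout(30);

open my $out, '>', $p or die "Can't save output: $!";
binmode $out;

my $bytes = 0;
my $res = $ua->request(
    HTTP::Request->new(GET => $u),
    sub {
        my ($chunk, $response, $protocol) = @_;
        print {$out} $chunk;                 # write each chunk as it arrives
        $bytes += length $chunk;
        print STDERR "received $bytes bytes\n";
    },
);
close $out or die "$!";
print $res->is_success ? "Done ($bytes bytes)\n"
                       : "Failed: " . $res->status_line . "\n";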


Replies are listed 'Best First'.
Re: Hang when downloading with LWP::Useragent and Crypt::SSLeay (Windows)
by Anonymous Monk on Mar 24, 2010 at 17:17 UTC
    use WWW::Mechanize 1.60;
    my $mech = WWW::Mechanize->new(qw' autocheck 1 ');
    $mech->get( $uri, ':content_file' => $tempfile );
      Thanks - if that works with Crypt::SSLeay as well then I'll give it a try tomorrow, unless any fixes for LWP::UserAgent crop up before then.
        WWW::Mechanize is a subclass of LWP::UserAgent, so its get method works the same way.
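
        For example, a minimal sketch with plain LWP::UserAgent (no WWW::Mechanize; the command-line arguments stand in for the original $u and $p). The :content_file option streams the body straight to disk instead of buffering it in the response object:

        use strict;
        use warnings;
        use LWP::UserAgent;

        my ($u, $p) = @ARGV;          # URL and output path, as in Download()

        my $ua = LWP::UserAgent->new;
        $ua->timeout(180);            # the LWP default

        # :content_file saves the response body directly to $p
        my $res = $ua->get($u, ':content_file' => $p);
        die "Could not fetch $u: ", $res->status_line, "\n"
            unless $res->is_success;
        print "Saved $u to $p\n";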
Re: Hang when downloading with LWP::Useragent and Crypt::SSLeay (Windows)
by b4swine (Pilgrim) on Mar 24, 2010 at 17:42 UTC
    You might try increasing the timeout. The default is 180 seconds (i.e. 3 minutes), but you have changed it to 10 seconds.
      The timeout is an inactivity timeout: it fires if 10 seconds pass without a socket being established or any bytes being received. knirirr's program wouldn't hang because the timeout is too short; it would simply end. Either the website is throttling his download, or he is hitting some kind of memory limit with his program.

        I know that the server is OK as I can run a similar script (which uses libcurl instead) from Linux machines. Is there any usable means of determining if it's a memory problem?

        I tried running the script with the -d option, but that did not help: it simply locked up at the same point without revealing why.
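
        If the debugger shows nothing, library-level tracing might. A hedged sketch, assuming an LWP 5.x install (where LWP::Debug still emits trace output) and Crypt::SSLeay, whose HTTPS_DEBUG environment variable turns on SSL-layer diagnostics:

        use strict;
        use warnings;

        BEGIN { $ENV{HTTPS_DEBUG} = 1 }   # Crypt::SSLeay / Net::SSL debug output
        use LWP::Debug qw(+);             # trace LWP internals (LWP 5.x only)
        use LWP::UserAgent;

        my ($u, $p) = @ARGV;              # URL and output path
        my $ua = LWP::UserAgent->new;
        $ua->timeout(180);

        my $res = $ua->get($u, ':content_file' => $p);
        print $res->status_line, "\n";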