Unfortunately this method works only with files that the browser does not normally render, such as .csv. If you use it with images (.jpg, etc.) or PDFs (.pdf), the documents load in the browser and the download fails. This exact same task has worked with WWW::Mechanize using the following approach:

my @links = $mech->find_all_links(url_regex => qr/\.pdf/i);
my @urls  = map { $_->url_abs } @links;
foreach my $foo (@urls) {
    my $abs_path = "C:/path";
    $mech->set_download_directory( $abs_path );
    $mech->get($foo);
}
The workaround that you suggested below didn't work with PDFs and CSVs; it worked with images only.

my $filename = "C:/path/filename";
my $foo      = "http link";
$mech->get($foo, ':content_file' => $filename);
my $response = $mech->get($url);
my $c = $mech->content;    # dummy request to initialize everything

open my $output, '>:raw', '/tmp/output.jpg';
print { $output } $response->decoded_content;
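If the browser keeps rendering the PDFs instead of saving them, one possible fallback is to combine the two snippets above: walk the PDF links yourself and write each response body to disk, bypassing the download directory entirely. This is only a sketch under my own assumptions: the file-naming scheme is invented, and it presumes that decoded_content really returns the raw PDF bytes in your setup, which is exactly what this thread is still trying to confirm.

use strict;
use warnings;
use URI;
use File::Basename qw(basename);

# Assumes $mech is already created and logged in, as in the snippets above.
my @links = $mech->find_all_links(url_regex => qr/\.pdf$/i);

foreach my $link (@links) {
    my $url = $link->url_abs;

    # Derive a local filename from the last path segment (hypothetical scheme).
    my $name = basename( URI->new($url)->path ) || 'download.pdf';
    my $path = "C:/path/$name";

    my $response = $mech->get($url);

    # Write the raw bytes; ':raw' avoids newline translation on Windows.
    open my $out, '>:raw', $path or die "Cannot write $path: $!";
    print {$out} $response->decoded_content;
    close $out;
}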