PerlMonks
Re^19: Need help with WWW::Mechanize and Chrome cookies

by bakiperl (Beadle)
on Jul 19, 2021 at 14:09 UTC [id://11135166]


in reply to Re^18: Need help with WWW::Mechanize and Chrome cookies
in thread Need help with WWW::Mechanize and Chrome cookies

Corion, forget the JavaScript for now. The goal is to use a single method to download all types of files (images, PDFs, CSVs, etc.).
According to the documentation, the following method should be able to do it:
my @links = $mech->find_all_links( url_regex => qr/\.pdf/i );
my @urls  = map { $_->url_abs } @links;

my $abs_path = "C:/path";
$mech->set_download_directory( $abs_path );   # set once, before the loop

foreach my $foo ( @urls ) {
    $mech->get( $foo );
}
Unfortunately, this method works only with files that the browser does not normally render, such as .csv. With images (.jpg, etc.) or PDFs (.pdf), the document loads in the browser instead and the download fails. The exact same task worked with plain WWW::Mechanize using the following method:
my $filename = "C:/path/filename";
my $foo      = "http link";
$mech->get( $foo, ':content_file' => $filename );
The workaround you suggested below didn't work with PDFs or CSVs; it worked with images only:
my $response = $mech->get($url);
my $c = $mech->content;    # dummy request to initialize everything
open my $output, '>:raw', '/tmp/output.jpg';
print { $output } $response->decoded_content;
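The workaround above hard-codes the output filename. A small helper that derives a local filename from the URL would make it reusable across file types; this is a sketch of that idea only (the helper name, URLs, and fallback name are illustrative, not from the thread):

```perl
use strict;
use warnings;

# Hypothetical helper: derive a local filename from a URL, falling
# back to a default name when the URL path has no usable last segment.
sub local_name_for {
    my ($url, $default) = @_;
    ( my $path = $url ) =~ s/[?#].*//;      # drop query string / fragment
    my ($name) = $path =~ m{/([^/]+)\z};    # last path segment, if any
    return ( defined $name && length $name ) ? $name : $default;
}

print local_name_for('https://example.com/files/report.pdf', 'download.bin'), "\n";
print local_name_for('https://example.com/', 'download.bin'), "\n";
```

The saved file could then be opened with `'>:raw'` and written with `$response->decoded_content`, exactly as in the snippet above.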

Re^20: Need help with WWW::Mechanize and Chrome cookies
by Corion (Patriarch) on Jul 27, 2021 at 19:20 UTC

    The following gets at the content of images displayed on a page:

    #!perl
    use strict;
    use warnings;
    use 5.012;

    use WWW::Mechanize::Chrome;
    use Log::Log4perl ':easy';
    Log::Log4perl->easy_init($TRACE);
    use File::Temp 'tempdir';
    use Cwd;
    use Data::Dumper;

    my $tempdir = tempdir();
    my $mech = WWW::Mechanize::Chrome->new(
        headless           => 1,
        data_directory     => $tempdir,
        download_directory => cwd(),
    );

    my $res = $mech->get('https://egp.rutgers.edu/cgi/wmc.pl');
    say Dumper $mech->getResourceTree_future()->get;

    my $link = $mech->xpath( '//a[text()="MY IMAGE"]', single => 1 );
    $mech->click($link);
    $mech->sleep(1);

    my $resources = $mech->getResourceTree_future()->get;
    my @images = grep { $_->{type} eq 'Image' } @{ $resources->{resources} };
    my $image  = $mech->getResourceContent_future( $images[0]->{url} )->get->{content};

    open my $fh, '>:raw', 'test.jpg';
    print $fh $image;

    Note that you will need a way to find which image is the one you want.
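One way to do that is to filter the resource tree on the URL instead of taking the first `Image` entry. A minimal sketch of the filtering step, run against a mock of the structure returned by `getResourceTree_future()->get` (the URLs and pattern here are made up for illustration):

```perl
use strict;
use warnings;

# Mock resource tree with made-up entries; the real one comes from
# $mech->getResourceTree_future()->get.
my $resources = {
    resources => [
        { type => 'Document', url => 'https://example.com/page.html' },
        { type => 'Image',    url => 'https://example.com/img/logo.png' },
        { type => 'Image',    url => 'https://example.com/img/photo_101.jpg' },
    ],
};

# Keep only the image whose URL matches the one we actually want.
my @wanted = grep { $_->{type} eq 'Image' && $_->{url} =~ /photo_101/ }
             @{ $resources->{resources} };

print scalar(@wanted), " matching image(s): $wanted[0]{url}\n";
```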

      Corion,
      Many thanks. The code worked for a single image download. How about looping over multiple images like this:
      my @ids = qw(101 102 103 104 105);
      foreach my $id ( @ids ) {
          my $link = $mech->xpath( "//a[text()='MY IMAGE $id']", single => 1 );
          $mech->click($link);
          $mech->sleep(1);

          my $resources = $mech->getResourceTree_future()->get;
          my @images = grep { $_->{type} eq 'Image' } @{ $resources->{resources} };
          print @images, "\n";   # this shows that the information in the array is not resetting

          my $image = $mech->getResourceContent_future( $images[0]->{url} )->get->{content};
          open my $fh, '>:raw', $id . '.jpg';
          print $fh $image;
          close $fh;
      }
      This code does not necessarily save the correct images, because it looks like the resources are not resetting. Thank you again.

        If the resources are not resetting, that is a bug in Chrome. I suggest that you enable trace logging and look at the information that goes over the wire between Perl and Chrome.
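Until the underlying cause is found, one possible workaround is to remember which resource URLs have already been handled and skip them on later iterations. A sketch of that bookkeeping only, on made-up data (the `%seen` idea is a suggestion, not something from the thread):

```perl
use strict;
use warnings;

# Simulate the resource URL lists Chrome might report after two
# successive clicks: the second list still contains the first image.
my @snapshots = (
    [ 'https://example.com/img/photo_101.jpg' ],
    [ 'https://example.com/img/photo_101.jpg',
      'https://example.com/img/photo_102.jpg' ],
);

my %seen;
my @new_per_click;
for my $urls (@snapshots) {
    # Keep only URLs not handled in an earlier iteration.
    my @fresh = grep { !$seen{$_}++ } @$urls;
    push @new_per_click, [@fresh];
}

print "click 1 new: @{ $new_per_click[0] }\n";
print "click 2 new: @{ $new_per_click[1] }\n";
```

In the real loop, `@$urls` would be the image URLs pulled from `getResourceTree_future()->get`, and only the fresh ones would be fetched with `getResourceContent_future`.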
