Re^16: Need help with WWW::Mechanize and Chrome cookies

by Corion (Patriarch)
on Jul 18, 2021 at 07:17 UTC


in reply to Re^15: Need help with WWW::Mechanize and Chrome cookies
in thread Need help with WWW::Mechanize and Chrome cookies

If a file does not download, have you tried inspecting the HTTP::Response object you receive from the ->get() call?

my $response = $mech->get($url);
open my $output, '>:raw', '/tmp/output.jpg';
print { $output } $response->decoded_content;
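For example (a minimal sketch, assuming the returned object supports the usual HTTP::Response accessors such as ->is_success, ->status_line and ->header), you can also look at the status and headers of that response before trusting the body:

if( ! $response->is_success ) {
    # The request itself failed; show why
    die "GET $url failed: " . $response->status_line;
}
# See what the server says it sent
print "Content-Type:        ", $response->header('Content-Type') // '(none)', "\n";
print "Content-Disposition: ", $response->header('Content-Disposition') // '(none)', "\n";
print "Bytes received:      ", length( $response->decoded_content ), "\n";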

Edit: You might need to touch the ->content of the browser first so that everything has time to initialize:

my $response = $mech->get($url);
my $c = $mech->content; # Dummy request to initialize everything
open my $output, '>:raw', '/tmp/output.jpg';
print { $output } $response->decoded_content;

Re^17: Need help with WWW::Mechanize and Chrome cookies
by bakiperl (Beadle) on Jul 18, 2021 at 16:31 UTC
    Corion,
    This code did the trick for image files but not for PDFs. I also noticed that it works only for simple URLs in the page; it did not work for me when the images are displayed using JavaScript.
    I was hoping to find a way to block the browser from displaying the documents by using something like the Content-Disposition option.
    Content-Disposition: attachment; filename=$filename
    This has worked very well with WWW::Mechanize.
    Thank you.

      Your problem statement seems to continuously change. I don't see how you can get from "image download using its URL" to "image displayed using JavaScript" without changing the problem entirely.

      Maybe you can come up with a small, self-contained program that reproduces your problem, shows the response you receive, and tells us which parts you have already investigated, instead of letting us guess. That way, we can more easily reproduce what you see and maybe suggest better approaches.

      If an image gets created using JavaScript, you will have to fetch the image data from the browser's memory, most likely by running some JavaScript of your own. You can run arbitrary JavaScript on a page using ->evaluate_in_page.
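      A minimal sketch of that idea (everything page-specific here is an assumption: the img selector, that the image is same-origin so the canvas is not tainted, the output path, and that ->evaluate_in_page hands back the value of the last JavaScript expression first):

      use MIME::Base64 'decode_base64';

      # Ask the browser to serialize the <img> into a base64 data: URI via a canvas
      my $js = q{
          var img = document.querySelector('img');      // adjust the selector for your page
          var canvas = document.createElement('canvas');
          canvas.width  = img.naturalWidth;
          canvas.height = img.naturalHeight;
          canvas.getContext('2d').drawImage(img, 0, 0);
          canvas.toDataURL('image/png');                 // throws if the image is cross-origin
      };
      my ($data_uri) = $mech->evaluate_in_page($js);

      # Strip the "data:image/png;base64," prefix and write out the raw bytes
      (my $b64 = $data_uri) =~ s!^data:image/png;base64,!!;
      open my $output, '>:raw', '/tmp/from_page.png' or die "Can't write: $!";
      print { $output } decode_base64($b64);

      Going through a canvas keeps everything inside the browser, so whatever cookies or JavaScript produced the image don't matter; the trade-off is that you get a re-encoded PNG rather than the original bytes.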

        Corion, forget the JavaScript for now. The goal is to use a single method to download all types of files (images, PDFs, CSVs, etc.).
        According to the documentation, the following method should be able to do it:
        my @links = $mech->find_all_links( url_regex => qr/\.pdf/i );
        my @urls  = map { $_->url_abs } @links;
        foreach my $foo (@urls) {
            my $abs_path = "C:/path";
            $mech->set_download_directory( $abs_path );
            $mech->get($foo);
        }
        Unfortunately, this method works only with files that the browser does not normally display, such as .csv. If you use it with images (.jpg, etc.) or PDFs, the document loads in the browser and the download fails. The exact same task has worked with WWW::Mechanize using the following method:
        my $filename = "C:/path/filename";
        my $foo = "http link";
        $mech->get($foo, ':content_file' => $filename);
        The workaround that you suggested below didn't work with PDFs and CSVs; it worked with images only.
        my $response = $mech->get($url);
        my $c = $mech->content; # Dummy request to initialize everything
        open my $output, '>:raw', '/tmp/output.jpg';
        print { $output } $response->decoded_content;
