click() returns as soon as the previous action completes. Welcome to the world of asynchronous programming.
This is further complicated by the fact that you are doing it all by proxy, and the AJAX calls to the server often leave the browser on the same page, so you cannot even inject JavaScript to check for page.onLoadFinished or document.readyState: the page is already loaded. But there are per-site hacks to help you achieve what you want. For your specific case, I have noticed that when click() happens you are either redirected to a new URL or a textbox fills in with an error message. Here is your answer, then:
use strict;
use warnings;
use WWW::Mechanize::PhantomJS;

my $mech = WWW::Mechanize::PhantomJS->new();
$mech->get('https://profile.ccli.com/account/signin?appContext=OLR&returnUrl=https%3A%2F%2Freporting.ccli.com%2F');
$mech->field( EmailAddress => 'me@test.com' );
$mech->field( Password => 'mypw' );

my $ori_uri = $mech->uri;
print "clicking from '$ori_uri' ...\n";
my $x = $mech->click_button( id => 'sign-in' );

my $maxtime = 50; # poll for a maximum of 50 seconds
my $success;
while( $maxtime-- > 0 ){
    print "checking if uri has changed from original: '".$mech->uri."' ...\n";
    if( $mech->uri ne $ori_uri ){ $success = 1; last }
    # similarly, check the contents of the error textbox here
    sleep(1);
}
die "something went wrong and we never left the page..." unless $success;

print "finally left and now at this uri: ".$mech->uri."\n";
$mech->render_content(
    format   => 'png',
    filename => 'ccli_login.png'
);
print "done!\n";
Personally, I rarely, if ever, use Mechanize for scraping. Instead, I first check the HTTP requests the site makes and then emulate them using LWP::UserAgent. Even the most convoluted JavaScript-driven, AJAX-calling click() will eventually resort to some POST or GET, which you can capture by opening the web development tools in Firefox and watching the Network tab. (I have even managed to do that with a "Microsoft Power BI" site, which is the epitome of twisted perversion, conceived by a mind descended from a match between the von Masoch and Marquis de Sade families.) But there is now an increasing number of sites which offer a (REST) API to their services, which you can again harvest with LWP::UserAgent or similar modules, e.g. Mojo::UserAgent.
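As a rough sketch of that approach (the endpoint URL and form field names below are invented for illustration; copy the real ones from the Network tab), replaying a captured login POST with LWP::UserAgent looks roughly like this:

```perl
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request::Common qw(POST);

# Hypothetical endpoint and form fields -- replace with whatever the
# Network tab shows the browser actually sending.
my $req = POST 'https://example.com/account/signin',
    [ EmailAddress => 'me@test.com', Password => 'mypw' ];

my $ua = LWP::UserAgent->new( timeout => 10 );
$ua->cookie_jar( {} );   # keep session cookies between requests

# Uncomment to actually send the request:
# my $res = $ua->request($req);
# die $res->status_line unless $res->is_success;
# print $res->decoded_content;

print $req->method, ' ', $req->uri, "\n";
```

The cookie jar is the important part: once the login POST succeeds, subsequent GETs through the same $ua carry the session cookie, just as the browser would.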
And that brings us to your "screenshot". If you insist on getting a screenshot (i.e. rendering the HTML received), that will be very difficult with this technique, because it provides you with the page's content (as HTML+JS or JSON) but neither renders it nor runs any JS. So you will probably be able to get a "last-login-time" field out of the data, or a picture of your avatar, but you will most likely not be able to render the HTML you received, unless it is some straightforward case. Still, the HTML will contain all you need, which you can parse using a DOM parser, e.g. HTML::TreeBuilder or Mojo::DOM.
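For example, pulling a last-login field out of the returned HTML with Mojo::DOM could look like this (the markup and CSS selectors below are invented for illustration, not taken from the real site):

```perl
use strict;
use warnings;
use Mojo::DOM;

# Invented sample of what the returned HTML might contain.
my $html = <<'HTML';
<div id="account">
  <span class="last-login">2024-01-15 09:30</span>
</div>
HTML

my $dom  = Mojo::DOM->new($html);
my $node = $dom->at('#account .last-login');  # CSS selector lookup
print "last login: ", $node->text, "\n" if $node;
```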
bw, bliako