Re: Forcing WWW::Mechanize to timeout

by Marshall (Canon)
on Apr 12, 2022 at 23:32 UTC


in reply to Forcing WWW::Mechanize to timeout

Hi Stevieb!

As a suggestion, your code that mentions "500" could be simplified.

for (1 .. API_ERRORS) {
    # If a timeout (i.e. code 500) occurs, or
    # some kind of other error, repeat the API call
    $response = $self->mech->request($request);
    if ($response->is_success) {
        # successful processing code here...
        return $response_data;
    }
    # all non-success codes cause a pause, then a re-try
    # elsif ($response->code == 500) {
    #     next;
    # }
    sleep(1);
}
Some sites will give a 404 or whatever, and then a repeat of the same request will succeed. I don't see the need to differentiate between a 500 and some other kind of error, at least not in the code that you show. For some sites, I have found that a brief pause, sleep(1), helps. In very, very rare situations, Mechanize just gets "stuck" and no amount of retries will fix it.
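
Factored into a small helper sub, that "retry anything that isn't success" idea might look roughly like this (an untested sketch; the sub name and arguments are mine, not taken from your code):

use strict;
use warnings;

# Untested sketch: retry any non-success response a fixed number of
# times, with a short pause between attempts.
sub request_with_retries {
    my ($mech, $request, $max_tries) = @_;
    $max_tries //= 5;
    my $response;
    for my $try (1 .. $max_tries) {
        $response = $mech->request($request);
        return $response if $response->is_success;
        warn "attempt $try of $max_tries failed: ", $response->status_line, "\n";
        sleep 1;   # brief pause -- the site often recovers by itself
    }
    return $response;   # still a failure after all tries; caller decides
}

# e.g.: my $response = request_with_retries($self->mech, $request, 5);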

Update: I looked back at some code from ~8 years ago:
I like your code better; the real question is how to simulate anything other than success, rather than just a 500 error code.

my $m2 = WWW::Mechanize->new();   # New Mechanize object for detailed results
                                  # may help save memory?? TBD... update (it did)
my $success = 0;
my $tries   = 0;
while (!$success and $tries++ < 10) {
    eval { $m2->get($fullurl); };
    if (!$@) {
        $success = 1;
    }
    else {
        print STDERR "Error: Retry Attempt $tries of 10\n";
        print LOG    "Error: Retry Attempt $tries of 10\n";
        sleep(3);
    }
}
die "aborted Web Site Error: $!" unless $success;   # ultimate failure!! PROGRAM ABORT !!!!
This probably is not typical, but in my app a website error happens about once per 2,000 requests. One retry is almost always sufficient; I have never seen a successful attempt on the 3rd retry. The "ultimate failure" happens far less than once in a million website accesses, but it does happen, and I have no explanation for that. This is a cron job, so it will run again an hour later and pick up where it left off. Anyway, there is more than just "time out" or "success": other error codes can and do come back, and more likely than not a simple retry will fix them.
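
As for how to simulate something other than success: since WWW::Mechanize is a subclass of LWP::UserAgent, one way (a rough sketch, not tested against your setup) is to install a "request_send" handler that returns a canned HTTP::Response instead of letting the request go out on the wire. The URL and the list of fake status codes below are made up for illustration:

#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;
use HTTP::Response;

my $mech = WWW::Mechanize->new(autocheck => 0);   # don't die on HTTP errors

# Fake status codes to hand back, one per request: fail twice, then succeed.
my @fake_codes = (503, 404, 200);

# In LWP::UserAgent's "request_send" phase, a handler that returns an
# HTTP::Response short-circuits the real network request.
$mech->add_handler(request_send => sub {
    my ($request) = @_;
    my $code = shift(@fake_codes) // 200;
    my $res  = HTTP::Response->new($code, undef, undef, "fake body for $code");
    $res->request($request);
    return $res;
});

for my $try (1 .. 5) {
    my $response = $mech->get('http://example.com/api');   # made-up URL
    if ($response->is_success) {
        print "success on try $try\n";
        last;
    }
    warn "got ", $response->code, " on try $try, retrying\n";
    sleep 1;
}

That exercises the retry loop with a 503 and a 404 before a 200, without needing the remote site to misbehave on cue.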
