llao has asked for the wisdom of the Perl Monks concerning the following question:
A few years back I wrote HTML/JavaScript code to scrape financial data from Yahoo for the company I work for, from inside the corporate firewall. It worked well. It wasn't pretty, but I didn't know how to access the page directly from code.
Now I want to rewrite it in Perl, but every piece of sample code I could gather from the net times out. I figured it must be the firewall, because if I replace the URL with a site inside the firewall, the code works fine.
My question is: the page can be accessed from a browser (IE, Firefox, Chrome), which uses HTTP to bring the page in, so why can't Perl code do the same? I assume Perl's library speaks the same protocol. How can the firewall tell that one request comes from a browser and the other from something else? I tried setting the user-agent to Mozilla, and it still timed out.
I would appreciate your help on this. Is what I am trying to do possible?
The following is my code, which is mostly copied from the net:
    use strict;
    use LWP::Simple;
    use LWP::UserAgent;

    my $webpage = "http://www.google.com";

    my $ua = LWP::UserAgent->new;
    $ua->agent("Mozilla/4.76 [en] (Windows NT 5.0; U)");
    $ua->timeout(30);

    my $request = HTTP::Request->new('GET', $webpage);
    my $response = $ua->request($request);

    my $html = $response->content;
    print $html;
    exit(0);
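For what it's worth, the usual difference between a browser and LWP on a corporate network is not the user-agent string but the proxy: browsers inside a firewall are typically configured to route outbound requests through an HTTP proxy, while LWP::UserAgent uses no proxy unless told to, so its direct connections are dropped and the request times out. Below is a minimal sketch of the same fetch with proxy support and error reporting; it assumes the proxy is advertised in the http_proxy/https_proxy environment variables, and the host/port in the commented-out line is a placeholder, not a real address.

    use strict;
    use warnings;
    use LWP::UserAgent;

    my $webpage = "http://www.google.com";

    my $ua = LWP::UserAgent->new;
    $ua->agent("Mozilla/4.76 [en] (Windows NT 5.0; U)");
    $ua->timeout(30);

    # Pick up proxy settings from the environment
    # (http_proxy, https_proxy, no_proxy), much as a browser
    # does from its connection settings
    $ua->env_proxy;

    # Or name the proxy explicitly -- placeholder host/port,
    # substitute whatever the browser's settings show:
    # $ua->proxy(['http', 'https'], 'http://proxy.example.com:8080');

    my $response = $ua->get($webpage);

    if ($response->is_success) {
        print $response->decoded_content;
    }
    else {
        # status_line tells a plain timeout apart from,
        # say, 407 Proxy Authentication Required
        die $response->status_line, "\n";
    }

If env_proxy finds nothing, the proxy host and port can be copied straight out of the browser's connection settings, since the browser is demonstrably getting through.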
Re: Code To Get Pages
by no_slogan (Deacon) on Mar 19, 2014 at 20:42 UTC
Re: Code To Get Pages
by dvinciguerra (Acolyte) on Mar 19, 2014 at 21:56 UTC