I have some code on a Yahoo Small Business server that scrapes an http:// webpage.
The key code is here:
use lib "../lib4";
use URI;
use Web::Scraper;
In ../lib4 I upload the Perl modules (.pm files) from my laptop that the script requires.
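As a quick sanity check (just a throwaway debugging snippet, assuming the script runs from the directory I think it does), I print Perl's module search path to confirm ../lib4 is actually being picked up:

use strict;
use warnings;
use lib "../lib4";
print join("\n", @INC), "\n";   # ../lib4 should appear first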
Keeping in mind that:
1- All code and Perl modules must reside on the Yahoo server.
2- No login or session state (id, password, cookies, etc.) is required to view the page.
Does anyone have a simple way to scrape an https:// url?
Edit - By simple I mean something NOT dependent on linked libraries or multiple Perl modules. I don't need to break through any SSL security; I just need the raw HTML from an https:// page.
Note: WWW::Mechanize does not work, as I can't get all of the Perl module files it depends on uploaded to the Yahoo server.
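For what it's worth, even the lightest-weight fetcher I know of, HTTP::Tiny (pure Perl, in core since Perl 5.14), hits the same wall: for https:// URLs it needs IO::Socket::SSL and Net::SSLeay, both XS modules. This is roughly what it would look like if those were available:

use strict;
use warnings;
use HTTP::Tiny;   # pure Perl, but https support needs IO::Socket::SSL + Net::SSLeay (XS)

my $url      = 'https://finance.yahoo.com/quote/SPY/history?p=SPY';
my $response = HTTP::Tiny->new->get($url);
die "Fetch failed: $response->{status} $response->{reason}\n"
    unless $response->{success};
print $response->{content};   # the raw HTML I'm after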
I have code that works locally:
use strict;
use warnings;
use URI;
use Web::Scraper;
use WWW::Mechanize;

my $url = 'https://finance.yahoo.com/quote/SPY/history?p=SPY';

# Fetch the page with Mechanize (it handles the https transport).
my $m = WWW::Mechanize->new();
$m->get($url);

# Collect the text of every table cell into the array 't1'.
# Note: scrape(URI->new(...)) fetches the page again itself via LWP,
# independently of the Mechanize request above.
my $testdata = scraper { process "tr > td", 't1[]' => 'TEXT'; };
my $res = $testdata->scrape( URI->new($url) );

print "\nURL is: " . $m->uri() . "\n";
print $res->{t1}[$_] . "\t" for 0 .. 5;
But I can't seem to get all the modules I need uploaded to make it run on Yahoo...
Edit -> It blows up on LWP, which requires XSLoader or DynaLoader (see below for the exact error), meaning it's linking to a compiled library it cannot find. I've been working on this for about three weeks and need to reach out for guidance, please.
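To narrow down which of the uploaded modules are the culprits, I scan the ../lib4 tree for anything that pulls in XS loading (just a local diagnostic, assuming the modules live where I uploaded them):

use strict;
use warnings;
use File::Find;

# Flag any .pm under ../lib4 that mentions XSLoader or DynaLoader --
# those need a compiled shared library and won't run as pure Perl.
find(sub {
    return unless /\.pm$/;
    open my $fh, '<', $_ or return;
    while (<$fh>) {
        if (/\b(?:XSLoader|DynaLoader)\b/) {
            print "$File::Find::name\n";
            last;
        }
    }
}, '../lib4');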
So I guess I am looking for a clever way of scraping an https:// URL without using Mechanize.
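One idea I haven't been able to verify (it assumes the Yahoo host permits shelling out and has curl installed, which I don't know): let an external program do the https transport and feed the raw HTML into Web::Scraper directly:

use strict;
use warnings;
use Web::Scraper;

my $url  = 'https://finance.yahoo.com/quote/SPY/history?p=SPY';
my $html = qx(curl -s -L '$url');   # -s quiet, -L follow redirects
die "curl failed or returned nothing\n" unless length $html;

# Scrape the fetched HTML string instead of letting Web::Scraper fetch the URL.
my $testdata = scraper { process "tr > td", 't1[]' => 'TEXT'; };
my $res = $testdata->scrape($html);
print $res->{t1}[$_] . "\t" for 0 .. 5;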