Quick fix: open Firefox's Developer Tools, go to the Network tab and watch the requests made while the page loads. One of them fetches the HTML for the actual contents of the page, the 1892 theses (alas, only mathematics is immortal); something like this:
http://operedigitali.lincei.it/rendicontiFMN/rol/visart.php?lang=it&type=mat&serie=5&anno=1892&volume=1

The longer, and IMO "proper", way is to do what Corion suggested and use WWW::Mechanize::Chrome (I am not acquainted with WWW::Mechanize::PhantomJS). This instructs the Google Chrome browser to fetch the web page, JavaScript and all, and then asks it to hand the rendered DOM back to Perl. From there it is straightforward to reach the desired div and suck out its contents with XPath selectors.
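A minimal sketch of that second approach, assuming Chrome is installed and reachable; the div id used in the XPath is a placeholder of mine, the real one must be read off the page source:

```perl
use strict;
use warnings;
use WWW::Mechanize::Chrome;

# Drive a headless Chrome so JavaScript runs before we look at the DOM
my $mech = WWW::Mechanize::Chrome->new(
    headless => 1,
);

$mech->get('http://operedigitali.lincei.it/rendicontiFMN/rol/visart.php?lang=it&type=mat&serie=5&anno=1892&volume=1');

# Pull the rendered node out of the DOM via an XPath selector;
# '@id="contents"' is a guess -- substitute the actual id of the div
my ($div) = $mech->xpath('//div[@id="contents"]', maybe => 1);
print $div->get_attribute('innerHTML'), "\n" if $div;
```

For the quick fix, of course, no browser is needed at all: once the URL above has been discovered in the Network tab, a plain GET with LWP::UserAgent or HTTP::Tiny on that URL returns the same HTML.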
bw, bliako
In reply to Re: Scraping Javascript page using perl by bliako in thread Scraping Javascript page using perl by Bpl