I think it would help to split your problem conceptually into two parts: the scraping and the parsing. On the scraping side, Selenium is a very good tool for automating multiple browsers and testing against them. If all you need is a single browser, look at, say, WWW::Mechanize::Chrome. But do you actually need a browser at all? If not, LWP is probably all you need. And Dave Cross is the publisher of that book, not the author.
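For the no-browser case, a minimal sketch of an LWP::UserAgent fetch (the URL is just a placeholder for whatever page you are scraping):

use strict;
use warnings;
use LWP::UserAgent;

my $ua  = LWP::UserAgent->new( timeout => 30 );
my $url = 'https://www.example.com/job-posting';   # placeholder URL
my $res = $ua->get($url);
die 'Fetch failed: ' . $res->status_line . "\n" unless $res->is_success;

# The decoded HTML is what the parsing step below works on
my $scrape = $res->decoded_content;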
On to the parsing: I have tried a cut-down version of your JSON. My code is:
use strict;
use warnings;
use JSON::PP;
use Data::Dumper;

# Single-quoted heredoc so that the "$(" in the JavaScript is not
# interpolated by Perl.
my $scrape = <<'EOF';
<script>
$(function () {
var opportunity = new US.Opportunity.CandidateOpportunityDetail({"Id":"10eb1d6c-359b-4f10-84d0-ca2525d88cce","Title":"Relationship Manager","Featured":false,"FullTime":true,"HoursPerWeek":null,"JobCategoryName":"Qualified Client Services","Locations":[{"Id":"dd1188b1-18d2-5e8d-9f93-aadbe1a3fd22","LocalizedName":"CA-Remote","LocalizedLocationId":null,"LocalizedDescription":"CA - Remote"}] });
EOF

# Capture the JSON object passed to the constructor: everything from
# the first "({" to the last "})" in the scraped text.
$scrape =~ m/\((\{.*\})\)/gms or die "No JSON found\n";
my $json = $1;
my $ref  = decode_json $json;
print Dumper $ref;
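A caveat on the regex: it grabs everything from the first "({" to the last "})" in the scraped text, which is fine for this cut-down sample but could over-match on a full page containing several script blocks. In that case you would probably want to anchor the match on something more specific, such as the CandidateOpportunityDetail( call itself.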
Does that give you what you need? If not, you may need to specify your problem more clearly.
Regards,
John Davies