bLIGU has asked for the wisdom of the Perl Monks concerning the following question:
I would like to write a spider/scraping application using Perl. Specifically, I want to collect all the links (URLs) on:
https://matchstat.com/tennis/tournaments/(...) and take information from lines like this:
R64 David Ferrer (ESP) 7 Filip Krajinovic (SRB) 1.13 5.50 7-5 7-5 7-6(4) H2H 2-0 (X)

Then I want to scrape (capture) all the text behind those links (marked here with (X)). The link in that example has this URL: https://matchstat.com/tennis/match-stats/m/8305102. After that, I'll use regular expressions to pick parameters out of the captured information; for me that is the easy part of the process, because I use regexes regularly. I found various web scraping frameworks and lots of PerlMonks/Stack Overflow discussions and blog posts about this, but I'm a little bit lost, and it doesn't help that I need both spider and scrape functionality...
Here is the long list of Perl libraries I found!
WWW::Scripter, WWW::Scripter::Plugin::JavaScript, WWW::Scripter::Plugin::Ajax, Web::Scraper, Web::Magic, Web::Query, Mojo/Mojo::DOM, XML::LibXML, Mozilla::Mechanize, Gtk3::WebKit with WWW::WebKit, Gtk2::WebKit, Wx::Htmlwindow, Wx::WebView, scrappy, Gungho, YADA, LWP/HTML::Parser
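Since Mojolicious (Mojo::DOM) is already on that list, here is a minimal sketch of the spider-plus-scrape step using Mojo::UserAgent and Mojo::DOM. The CSS selector (`a[href*="/match-stats/"]`) and the exact tournament URL are assumptions about the page markup, not something verified against the site, so adjust them to what the real pages contain:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Mojo::UserAgent;
use Mojo::URL;

my $ua   = Mojo::UserAgent->new(max_redirects => 3);
my $base = 'https://matchstat.com/tennis/tournaments/';  # placeholder: fill in the real tournament page

my $res = $ua->get($base)->result;
die 'Fetch failed: ' . $res->code unless $res->is_success;

# Collect every link on the page that points at a match-stats page.
# The selector is a guess at the markup and may need adjusting.
my @links = $res->dom->find('a[href*="/match-stats/"]')
                 ->map(attr => 'href')->each;

for my $href (@links) {
    # Resolve relative links against the tournament page URL
    my $url = Mojo::URL->new($href)->to_abs(Mojo::URL->new($base));

    # Fetch the match page and keep its visible text for the later regex step
    my $match = $ua->get($url)->result;
    next unless $match->is_success;

    my $text = $match->dom->all_text;
    print "=== $url ===\n$text\n";

    sleep 1;  # be polite to the server
}
```

The same two-step pattern (find links, then fetch and extract text from each) works with Web::Scraper or LWP plus HTML::Parser as well; Mojo is just one of the options from the list above, and it handles only static HTML, so a WebKit-based module would be needed if the pages are built by JavaScript.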
Replies are listed 'Best First'.
Re: How can I get all URL's of a domain and get text (scraping)?
by Anonymous Monk on Sep 15, 2015 at 23:54 UTC
by Anonymous Monk on Sep 16, 2015 at 21:53 UTC
Re: How can I get all URL's of a domain and get text (scraping)?
by nikosv (Deacon) on Sep 18, 2015 at 21:07 UTC
by Anonymous Monk on Sep 18, 2015 at 22:45 UTC