bLIGU has asked for the wisdom of the Perl Monks concerning the following question:

I am looking to build a spider/scraping application using Perl. Specifically, I want to collect all the links (URLs) on:

https://matchstat.com/tennis/tournaments/(...)

extract information from lines like this:

R64    David Ferrer (ESP) 7    Filip Krajinovic (SRB)    1.13    5.50    7-5 7-5 7-6(4)    H2H 2-0     (X)

and scrape (capture) all the text contained in the links (marked here with (X)). The link in that example has this URL: https://matchstat.com/tennis/match-stats/m/8305102. After that, I'll use regular expressions to pick out parameters from the captured information. For me that is the easy part of the whole process, because I use regexes regularly. I found various web scraping frameworks and lots of PerlMonks/Stack Overflow discussions and blog posts on the subject, but I'm a little bit lost. It doesn't help that I need both spidering and scraping functions...

Here is the long list of Perl libraries I found!

WWW::Scripter, WWW::Scripter::Plugin::JavaScript, WWW::Scripter::Plugin::Ajax, Web::Scraper, Web::Magic, Web::Query, Mojo/Mojo::DOM, XML::LibXML, Mozilla::Mechanize, Gtk3::WebKit with WWW::WebKit, Gtk2::WebKit, Wx::Htmlwindow, Wx::WebView, scrappy, Gungho, YADA, LWP/HTML::Parser
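
To make this concrete, here is a rough, untested sketch of what I have in mind, using Mojo::UserAgent and Mojo::DOM from that list. The CSS selector and the tournament URL are guesses on my part and would still have to be checked against the real pages:

    #!/usr/bin/perl
    use Mojo::Base -strict;
    use Mojo::UserAgent;
    use Mojo::URL;

    # Rough sketch, not tested: fetch one tournament page, collect every
    # "match-stats" link (the (X) links above), then fetch each linked page
    # and grab its text for later regex work. Error handling omitted.
    my $ua = Mojo::UserAgent->new(max_redirects => 5);

    # placeholder -- the real tournament page URL goes here
    my $tournament = 'https://matchstat.com/tennis/tournaments/...';
    my $base = Mojo::URL->new($tournament);
    my $dom  = $ua->get($tournament)->res->dom;

    # guessing that the (X) links can be picked out by their href pattern
    my @links = $dom->find('a[href*="/tennis/match-stats/m/"]')
                    ->map(attr => 'href')->uniq->each;

    for my $href (@links) {
        my $url  = Mojo::URL->new($href)->to_abs($base);
        my $text = $ua->get($url)->res->dom->all_text;
        print "== $url ==\n$text\n";
        # ... regexes on $text go here ...
        sleep 1;    # be polite to the server
    }

I reached for Mojo here only because it handles the HTTP fetching and the CSS selection in one module; maybe one of the other modules on the list is a better starting point.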

Replies are listed 'Best First'.
Re: How can I get all URL's of a domain and get text (scraping)?
by Anonymous Monk on Sep 15, 2015 at 23:54 UTC

    ...I found various web scraping frameworks and lot of Perlmonks/stack discussions and blog posts to do that but I'm a little bit lost...

    Pick three, try them out, see what happens

    The State of Web spidering in Perl

      Yeees, OK! But previous experience can help me avoid wasting time...
Re: How can I get all URL's of a domain and get text (scraping)?
by nikosv (Deacon) on Sep 18, 2015 at 21:07 UTC
    Are people still doing scraping manually? Try an automated solution:
    https://www.kimonolabs.com/

    What do you get for free?
    *visually picking the fields you are interested in
    *tweaking the query with CSS selectors
    *an API and export of the captured data in a variety of formats
    *scheduling of when the crawling should run

      Are people still doing scraping manually? Try an automated solution

      Right, sure, "automated" in that it's a program you write through a visual interface, that runs on a schedule ... like any program you might write with your fingers
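
      For example, the "pick fields with CSS selectors" part is a few lines of plain Perl with Mojo::DOM; a hypothetical, untested snippet (the selector is a guess, the real markup will differ):

          use Mojo::Base -strict;
          use Mojo::UserAgent;

          # Hypothetical selector -- the real markup on the match page will differ
          my $dom = Mojo::UserAgent->new
              ->get('https://matchstat.com/tennis/match-stats/m/8305102')
              ->res->dom;
          say $_->all_text for $dom->find('table td')->each;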