in reply to Question regarding web scraping

Hello Lisa1993, and welcome to the Monastery!

As Corion says, you’d be better off using a dedicated module to extract the HTML you want. But in the meantime...

hippo has identified the syntax errors in your code. But, even when these are fixed, the regular expression won’t match any of the content on the web page in question. To get it to match, I had to tweak it in two places:

use strict;
use warnings;
use LWP::Simple;

my $URL = 'https://www.reddit.com/r/unitedkingdom/comments/58m2hs/' .
          'i_daniel_blake_is_released_today/';
my $CONTENT = get($URL);

my $regex = '<div class="usertext-body may-blank-within md-container ">' .
            '<div class="md">(.+?)</div>\s*</div>' .
            '</form><ul class="flat-list buttons">';

my $x     = '';
my $count = 0;

while ($CONTENT =~ m{$regex}gs)
{
    $x .= $1;
    ++$count;
}

print $x;
print "Count: $count\n";

First, you need to allow for (optional) whitespace between the two closing </div> tags. Second, you need to remove the space at the end of the regex. Also note that it isn’t necessary to escape the quotation character, and you can avoid escaping forward slashes by changing the regex delimiter (as shown above).

With these changes, I get 80 matches.

Also note the inclusion of use strict and use warnings, and the declaration of variables using my. This is basic good practice in Perl.

Hope that helps,

Athanasius <°(((>< contra mundum

Re^2: Question regarding web scraping
by Lisa1993 (Acolyte) on Oct 22, 2016 at 15:43 UTC

    That's brilliant! You've made my day! Thank you very much.

    Can I just ask too, is there any way to run the script for multiple URLs at once? Or would I need a more complicated program for that?

    Thanks again!

      It's trivial: wrap part of your code in a for() loop, and turn the single scalar $URL into an array, @URLS, that contains a list of URLs instead. The for() loop then iterates over this list. Note that this assumes the regex is the same for all URLs. Untested:

      use strict;
      use warnings;
      use LWP::Simple;

      my @URLS = qw(
          http://one.example.com
          http://two.example.com
          http://three.example.com
      );

      my $regex = '<div class="usertext-body may-blank-within md-container ">' .
                  '<div class="md">(.+?)</div>\s*</div>' .
                  '</form><ul class="flat-list buttons">';

      for my $URL (@URLS) {
          my $CONTENT = get($URL);
          my $x     = '';
          my $count = 0;

          while ($CONTENT =~ m{$regex}gs) {
              $x .= $1;
              ++$count;
          }

          print "---$URL---\n";
          print $x;
          print "Count: $count\n";
      }
        That's brilliant, thank you!

        Can I also ask (just one last question, sorry!): I've been reading that it's good protocol to slow down requests to avoid the website potentially banning the IP address. I inserted a simple "sleep 60" into the code that you have very kindly written for me. This seems to be working very well and has successfully staggered the requests by one minute each.

        However, I was then told that the pauses between requests should be random (as opposed to patterned every 60 seconds).

        Do you have any thoughts on this?

        Thanks again for all of your help!
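          On the randomised pauses: a common approach is a fixed minimum delay plus a random jitter, so requests never come faster than the minimum but the interval is no longer a regular pattern. A minimal sketch using Perl's built-in rand() (the 30-second minimum and 60-second jitter are arbitrary example values, not anything the site requires):

          ```perl
          use strict;
          use warnings;

          my $min_delay = 30;    # shortest pause, in seconds (example value)
          my $jitter    = 60;    # extra random spread, in seconds (example value)

          # Pick a delay somewhere in the range 30..89 seconds.
          my $delay = $min_delay + int(rand($jitter));

          print "Pausing for $delay seconds...\n";
          # sleep $delay;    # uncomment this line inside your fetch loop
          ```

          You would call this between iterations of the for() loop over @URLS, after each get($URL).
          
          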