in reply to Re: Question regarding web scraping
in thread Question regarding web scraping

That's brilliant! You've made my day! Thank you very much.

Can I just ask too, is there any way to run the script for multiple URLs at once? Or would I need a more complicated programme for that?

Thanks again!

Re^3: Question regarding web scraping
by stevieb (Canon) on Oct 22, 2016 at 16:22 UTC

    It's trivial: wrap part of your code in a for() loop, and turn the single scalar $URL into an array, @URLS, that contains a list of URLs instead. The for() loop then iterates over that list. Note that this assumes the regex is the same for all URLs. Untested:

    use strict;
    use warnings;

    use LWP::Simple;

    my @URLS = qw(
        http://one.example.com
        http://two.example.com
        http://three.example.com
    );

    # same regex applied to every page
    my $regex = '<div class="usertext-body may-blank-within md-container ">'
              . '<div class="md">(.+?)</div>\s*</div>'
              . '</form><ul class="flat-list buttons">';

    for my $URL (@URLS){
        my $CONTENT = get($URL);

        # get() returns undef on failure; skip to the next URL if so
        next unless defined $CONTENT;

        my $x = '';
        my $count = 0;

        # collect every match on the page
        while ($CONTENT =~ m{$regex}gs){
            $x .= $1;
            ++$count;
        }

        print "---$URL---\n";
        print $x;
        print "Count: $count\n";
    }
      That's brilliant, thank you!

      Can I also ask (just one last question, sorry!), I've been reading that it's good practice to slow down requests to avoid the website potentially banning the IP address. I inserted a simple "sleep 60" into the code that you have very kindly written for me. This seems to be working very well and has successfully staggered every request by one minute.

      However, I was then told that the pauses between requests should be random (as opposed to a fixed 60 seconds every time).

      Do you have any thoughts on this?

      Thanks again for all of your help!

        See rand.
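
        For example, a minimal sketch of a randomized pause dropped into the loop above (the 30-second floor and the 0-60 second spread are arbitrary assumptions, not anything the site prescribes):

            for my $URL (@URLS){
                my $CONTENT = get($URL);
                # ... process $CONTENT as before ...

                # pause a random 30 to 90 seconds between requests;
                # int rand 61 yields an integer from 0 to 60
                sleep(30 + int rand 61);
            }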

        Also, it's good practice to delay each subsequent request by at least as long as the previous request took; see time for getting the current time, so you can record when a request started and ended and compute its duration.
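
        A rough sketch of that idea, combined with the random pause above (the specific numbers are assumptions):

            for my $URL (@URLS){
                my $start   = time;
                my $CONTENT = get($URL);
                my $elapsed = time - $start;   # whole seconds the request took

                # ... process $CONTENT as before ...

                # wait at least as long as the request took, plus 0-60 seconds
                sleep($elapsed + int rand 61);
            }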