Lisa1993 has asked for the wisdom of the Perl Monks concerning the following question:
Hello Monks,
I posted here a couple of weeks ago about a problem I was having writing a simple web-scraping programme that scrapes comments from the website Reddit, so that I can analyse them using computational linguistics methods.
Following the very helpful advice of the forum members here, I revised my code and switched from using LWP::Simple to Mojo::UserAgent.
The following code does almost exactly what I need it to do (i.e. it downloads the comments and stores them in a way that is readable for my corpus software).
use Mojo::UserAgent;

my $url = 'https://www.reddit.com/r/unitedkingdom/comments/58m2hs/i_daniel_blake_is_released_today/.json';
my $ua   = Mojo::UserAgent->new;
my $data = $ua->get($url)->res->json;

foreach my $comment ( @{$data} ) {
    foreach my $child ( @{ $comment->{'data'}->{'children'} } ) {
        # output path needs changing
        open my $out, '>>', 'C:/Users/user/perl_tests/redresults221.txt'
            or die "Can't open output file: $!";
        if ( $child->{'data'}->{'body'} ) {
            print $out $child->{'data'}->{'body'} . "\n";
        }
        close $out;
    }
}
However, I need to add two more elements to the code:
1) I need it to download multiple URLs, either by a) letting me define a list of URLs within the code (e.g. through some kind of my @urls = command) or b) having the code read a separate .txt file containing the URLs that I am interested in and then run the existing code on each of them (if that makes sense!?!)
2) I need to put a delay or sleep element in the code so that my IP address does not get flagged by the site. In my original programme I used the command sleep(int(rand(30))); will this still work with the Mojo::UserAgent library?
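For option 1b, a minimal sketch of reading the URL list from a plain-text file, one URL per line (the file name urls.txt and the blank/comment-line handling are my own assumptions, not from the code above):

```perl
use strict;
use warnings;

# Read URLs from a text file, one per line, skipping blank lines
# and lines that start with '#'. The file name passed in is just
# a placeholder; substitute your own path.
sub read_url_list {
    my ($file) = @_;
    open my $fh, '<', $file or die "Can't open $file: $!";
    my @urls;
    while ( my $line = <$fh> ) {
        chomp $line;
        next if $line =~ /^\s*(?:#|$)/;    # skip comments and blanks
        push @urls, $line;
    }
    close $fh;
    return @urls;
}

# Usage: my @urls = read_url_list('urls.txt');
```

The returned array can then feed the same foreach loop that the single-URL version already uses.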
Thanks in advance for your help, it is really appreciated. What you guys do for us beginners is pretty extraordinary!
EDIT: In my original post I should have made it clear that the code that I posted was very kindly written by marto. I never meant to imply that I had written it myself, but I can understand that my wording was very careless.
I now have working code for my problem, based on three solutions that were generously written for me by marto, Athanasius and stevieb:
use Mojo::UserAgent;

my @urls = qw(
    https://www.example1.com.json
    https://www.example2.com.json
    https://www.example3.com.json
);

my $ua = Mojo::UserAgent->new;

for my $url (@urls) {
    my $data = $ua->get($url)->res->json;
    sleep( int( rand(60) ) );    # random delay between requests

    foreach my $comment ( @{$data} ) {
        foreach my $child ( @{ $comment->{'data'}->{'children'} } ) {
            # output path needs changing
            open my $out, '>>', 'C:/Users/user/perl_tests/redresults805.txt'
                or die "Can't open output file: $!";
            if ( $child->{'data'}->{'body'} ) {
                print $out $child->{'data'}->{'body'} . "\n";
            }
            close $out;
        }
    }
}
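On the sleep question: sleep is a core Perl builtin, so it pauses the whole script regardless of which HTTP client is in use; it works the same with Mojo::UserAgent as it did with LWP::Simple. A small sketch of a reusable delay helper (the one-second minimum is my own choice, not part of the original code):

```perl
use strict;
use warnings;

# Pause for a random number of seconds between 1 and $max_seconds,
# so requests are not sent in a rapid burst. sleep() is a core Perl
# builtin and is independent of the HTTP library being used.
sub polite_delay {
    my ($max_seconds) = @_;
    my $delay = 1 + int rand($max_seconds);    # 1 .. $max_seconds
    sleep $delay;
    return $delay;
}

# Usage, between requests in the URL loop:
# polite_delay(30);
```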
Replies are listed 'Best First'.
Re: multiple URL web scraping
by haukex (Archbishop) on Nov 02, 2016 at 14:21 UTC
by Lisa1993 (Acolyte) on Nov 03, 2016 at 08:09 UTC
Re: multiple URL web scraping
by Corion (Patriarch) on Nov 02, 2016 at 14:14 UTC
by Lisa1993 (Acolyte) on Nov 03, 2016 at 08:45 UTC
by haukex (Archbishop) on Nov 03, 2016 at 09:08 UTC
by Lisa1993 (Acolyte) on Nov 03, 2016 at 09:23 UTC
Re: multiple URL web scraping
by marto (Cardinal) on Nov 02, 2016 at 14:25 UTC
by Lisa1993 (Acolyte) on Nov 03, 2016 at 08:49 UTC
by marto (Cardinal) on Nov 03, 2016 at 09:31 UTC
by Lisa1993 (Acolyte) on Nov 03, 2016 at 09:39 UTC
Re: multiple URL web scraping
by Anonymous Monk on Nov 02, 2016 at 14:41 UTC
by Lisa1993 (Acolyte) on Nov 03, 2016 at 08:08 UTC