alialialia has asked for the wisdom of the Perl Monks concerning the following question:

Hello! I'm trying to figure out how to scrape a website in order to analyze the data. I want to collect the links on the first page and the descriptions from each followed URL into one XML file. My code doesn't seem to work...

#!/usr/bin/perl/ -w

my $output_folder_html = "/Users/*********html.rtf";
my %follow_urls = ("https://www.dogfoodadvisor.com/dog-food-reviews/brand" => 1);
my $no_of_follow_urls = 1;
my %all_take_urls;
my %already_followed_urls;
my $basis_url = "https://www.dogfoodadvisor.com/dog-food-reviews/";
my $iteration_counter = 0;

while ($no_of_follow_urls > 0) {

    # creating output to show progress
    $iteration_counter++;
    my $no_of_take_urls = keys %all_take_urls;
    print "------------------------------------------------------\nIteration $iteration_counter: $no_of_take_urls take-urls found so far!\n";

    # downloading follow-urls
    my %new_follow_urls;
    foreach my $follow_url (keys %follow_urls) {
        print "\nAnalyzing $follow_url ...\n";
        $already_followed_urls{$follow_url} = 1;
        my $html = qx (curl "$follow_url");

        # check each hypertext link within page
        my @html = split(/a href=/, $html);
        foreach my $link (@html) {
            if ($link =~ m/^quedisplay.html\?aTYPE=([0-9]+?)&aPAGE=([0-9]+)/) {
                my $follow_url = $basis_url . "quedisplay.html?aTYPE=" . $1 . "&aPAGE=" . $2;
                $new_follow_urls{$follow_url} = 1;
            }
            elsif ($link =~ m/^quedisplay.html\?aTYPE=([0-9]+)/) {
                my $follow_url = $basis_url . "quedisplay.html?aTYPE=" . $1;
                $new_follow_urls{$follow_url} = 1;
            }
            elsif ($link =~ m/^quereadisplay.html\?0\+([0-9]+)/) {
                my $take_url = $basis_url . "quereadisplay.html?0+" . $1;
                $all_take_urls{$take_url} = 1;
            }
        }
    }

    # check, if new follow urls have been found
    undef (%follow_urls);
    print "\nnew follow links:\n";
    foreach my $follow_url (keys %new_follow_urls) {
        unless (defined $already_followed_urls{$follow_url}) {
            $follow_urls{$follow_url} = 1;
            print "\t$follow_url\n";
        }
    }

    # check number of new follow pages
    $no_of_follow_urls = keys %follow_urls;
}

# download all take-files as html
my $counter = 0;
foreach my $take_url (keys %all_take_urls) {
    my $html = qx (curl "$take_url");

    # saves html to file
    my $output_file = $outputfolder_html . $take_url . ".html";
    open OUT, "> $output_file";
    print OUT "$html";
    close OUT;
}

Replies are listed 'Best First'.
Re: Scraping a website
by marto (Cardinal) on Jul 31, 2018 at 04:45 UTC

    Adding use strict;:

    Global symbol "$outputfolder_html" requires explicit package name (did you forget to declare "my $outputfolder_html"?) at dog.pl line 67.
    Execution of dog.pl aborted due to compilation errors.
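
    To make the point concrete, here's a tiny sketch (hypothetical path and names, not your actual script) of the kind of mismatch strict catches at compile time instead of silently giving you an undefined value:

        use strict;
        use warnings;

        my $output_folder_html = "/Users/example/html/";       # declared with two underscores

        my $output_file = $outputfolder_html . "page.html";    # typo: missing underscore
        # With strict this fails to compile:
        # Global symbol "$outputfolder_html" requires explicit package name ...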

    That aside, you've really overcomplicated this. However, a quick glance at the terms of use shows the site owners don't want their site accessed in this manner.

      Thank you for your help. I wasn't aware of that, so thanks for letting me know.

        Some further comments. Use the three argument form of open:

        open(my $fh, ">", "output.txt") or die "Can't open > output.txt: $!";

        Detailed explanation.

        In general I think you've overcomplicated the problem. I'd avoid shelling out to curl and parsing the results with a regex. Look at Mojo::DOM and Mojo::UserAgent, and Super Search for examples of both. They're very powerful tools that make this sort of work trivial; a rough sketch is below.
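
        Something like this (untested; the starting URL, the CSS selector and the /dog-food-reviews/ filter are assumptions for illustration, not checked against the site) pulls the links from the brands page without any manual string splitting:

            use strict;
            use warnings;
            use Mojo::UserAgent;

            my $ua  = Mojo::UserAgent->new(max_redirects => 5);
            my $url = 'https://www.dogfoodadvisor.com/dog-food-reviews/brand/';

            # result() dies on connection errors; dom() parses the HTML body
            my $dom = $ua->get($url)->result->dom;

            # find() takes a CSS selector, so no hand-rolled regex over the markup
            for my $link ($dom->find('a[href]')->each) {
                my $href = $link->attr('href');
                print "$href\n" if $href =~ m{/dog-food-reviews/};
            }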

Re: Scraping a website
by markong (Pilgrim) on Jul 31, 2018 at 11:44 UTC
    # check each hypertext link within page
    my @html = split(/a href=/, $html);

    A recommendation: you are doing a lot of extra work to collect URLs and save the related content, the code is a bit verbose, and you could still miss something. Lean on "standard" tools to help yourself:

    1. HTML::LinkExtor - Extract links from an HTML document
    2. LWP::UserAgent - Web user agent class - look at its get(...) method and in particular at its :content_file => $filename option

    This should simplify things and help a lot. A rough sketch of the combination is below.
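
    For example (untested; the /dog-food-reviews/ filter and the filename scheme are just illustrative assumptions):

        use strict;
        use warnings;
        use LWP::UserAgent;
        use HTML::LinkExtor;
        use URI;

        my $ua    = LWP::UserAgent->new;
        my $start = 'https://www.dogfoodadvisor.com/dog-food-reviews/brand';

        # collect the href of every <a> tag via a callback
        my @links;
        my $extor = HTML::LinkExtor->new(sub {
            my ($tag, %attr) = @_;
            push @links, $attr{href} if $tag eq 'a' && $attr{href};
        });

        my $res = $ua->get($start);
        die $res->status_line unless $res->is_success;
        $extor->parse($res->decoded_content);

        for my $link (@links) {
            # make relative links absolute against the page we fetched
            my $abs = URI->new_abs($link, $res->base)->as_string;
            next unless $abs =~ m{/dog-food-reviews/};

            # turn the URL into a safe filename and let get() save straight to disk
            (my $file = $abs) =~ s{[^\w.-]+}{_}g;
            $ua->get($abs, ':content_file' => "$file.html");
        }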