memwaster has asked for the wisdom of the Perl Monks concerning the following question:

Hi

This is my first attempt at a Perl program that does anything vaguely useful: a web crawler that currently just prints the URLs it visits, but may later look for some specific content. Some of the code I pinched from an example program. The problem is that it uses up 2 GB of memory in about 10 minutes and I cannot figure out why. I have probably done something fundamentally wrong. Can anyone spot an obvious problem, or offer ideas on how to troubleshoot it? Thanks.

Oh, the usage is basically "./crawler.pl www.mysite.com mysite.com" where the second argument restricts the links it follows to avoid crawling sites other than the target.

#!/usr/bin/perl
use strict;
use warnings;

use LWP::UserAgent;
use HTML::LinkExtor;
use URI::URL;

my $site     = shift @ARGV;
my $domain   = shift @ARGV;
my $firsturl = "http://$site";
my $ua       = LWP::UserAgent->new;

my @links    = ();
my @newlinks = ();
my @visited  = ();
my $newlink  = "";
my $link     = "";

push (@links, $firsturl);

# Set up a callback that collects links
sub callback {
    my ($tag, %attr) = @_;
    return if $tag ne 'a';  # we only look closer at <a ...>
    push(@newlinks, values %attr);
}

# Make the parser
my $p = HTML::LinkExtor->new(\&callback);

# The main loop
MAIN: foreach my $url (@links) {

    # Skip if we have been here before
    foreach my $inside (@visited) {
        #print "skipping $url\n" if $url eq $inside;
        next MAIN if $url eq $inside;
    }

    # Request document and parse it as it arrives
    print "visiting $url\n";
    my $res = $ua->request(HTTP::Request->new(GET => $url),
                           sub { $p->parse($_[0]) });

    # Remember that we have visited this url
    push (@visited, $url);

    # Expand all URLs to absolute ones
    my $base = $res->base;
    @newlinks = map { $_ = url($_, $base)->abs; } @newlinks;

    # Reduce the links to only ones in our domain
    foreach $newlink (@newlinks) {
        if ($newlink =~ /$domain/) {
            push (@links, $newlink);
        }
    }
}

Re: Newbie memory leak
by ikegami (Patriarch) on Jul 25, 2007 at 16:35 UTC
    • @newlinks is never cleared between documents (see the sketch after this list).
    • Any data HTML::LinkExtor accumulates is not cleared between documents either.
    • @links grows without bound: every in-domain link found is pushed onto it and nothing is ever removed.
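
    For illustration only (an editor's sketch, not the original program), here is a minimal fetch step that plugs the first two leaks: @newlinks is emptied for every document, a fresh HTML::LinkExtor is built per document, and $p->eof is called once the response is complete. The URL is a placeholder.

    use strict;
    use warnings;
    use LWP::UserAgent;
    use HTTP::Request;
    use HTML::LinkExtor;

    my $ua = LWP::UserAgent->new;

    for my $url ('http://www.example.com/') {    # placeholder URL
        my @newlinks;                            # scoped to this document, so it cannot accumulate
        my $p = HTML::LinkExtor->new(sub {
            my ($tag, %attr) = @_;
            push @newlinks, values %attr if $tag eq 'a';
        });
        my $res = $ua->request(HTTP::Request->new(GET => $url),
                               sub { $p->parse($_[0]) });
        $p->eof;                                 # flush the parser's buffered state
        print "$_\n" for @newlinks;              # use the links, then let them go
    }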

    Unrelated problems:

    • Using foreach to loop over an ever-growing array is not recommended ("If any part of LIST is an array, foreach will get very confused if you add or remove elements within the loop body"). Use a while loop that shifts items off the array instead.
    • When using $p->parse, you need to finish each document with $p->eof.
    • The "skipping" check should be done on absolute URIs.
    • @visited should be a hash, which makes the "skipping" loop unnecessary.
    • /$domain/ is not a safe way to enforce the desired restriction: it matches anywhere in the URL and treats $domain as a regex pattern (an illustrative alternative follows this list).
    • URI should be used instead of URI::URL.
    • $newlink should be declared where it's used.
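
    As an illustration of the @visited-as-a-hash and domain-check points (an editor's sketch with a hypothetical want_link helper and a placeholder domain, not ikegami's code), the restriction can be tested against the URI's host instead of matching /$domain/ anywhere in the string:

    use strict;
    use warnings;
    use URI;

    my $domain = 'mysite.com';    # placeholder domain
    my %visited;                  # hash: O(1) "seen it already?" lookups

    sub want_link {
        my ($link, $base) = @_;
        my $uri = URI->new_abs($link, $base)->canonical;
        return if $visited{$uri}++;                    # already queued or fetched
        my $host = eval { $uri->host } || '';          # mailto:, javascript: etc. have no host
        return unless $host eq $domain
                   or $host =~ /\.\Q$domain\E\z/;      # exact host or a subdomain of it
        return $uri;
    }

    my $uri = want_link('/about.html', 'http://www.mysite.com/');
    print $uri ? "follow $uri\n" : "skip\n";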

    Update: Added the foreach problem.
    Update: Added the $p->eof problem.

      Here is the code with the above suggestions applied.

      #!/usr/bin/perl
      use strict;
      use warnings;

      use LWP::UserAgent;
      use HTML::LinkExtor;
      use URI;

      my $site   = shift @ARGV;  # Why not just take an uri as argument?
      my $domain = shift @ARGV;  # Should really be an uri.

      # Assume it's a HTTP server if not.
      if ($site !~ /^\w+:/) {
          $site = "http://$site";
      }

      my $first_uri = URI->new($site)->canonical;

      my @to_visit = ( $first_uri );
      my %seen     = map { $_ => 1 } @to_visit;

      my $ua = LWP::UserAgent->new();

      while (@to_visit) {
          my $uri = shift(@to_visit);

          # Make the parser
          my $p = HTML::LinkExtor->new(sub {
              my ($tag, %attr) = @_;

              # Only interested in A elements.
              return if $tag ne 'a';

              # Only interested in the HREF attribute.
              return if not exists $attr{href};

              my $link_uri = URI->new_abs($attr{href}, $uri)->canonical;

              # Ignore links outside of the domain.
              return if $link_uri->rel($first_uri) eq $link_uri;

              # Ignore links already in the queue and links already visited.
              return if $seen{$link_uri}++;

              push @to_visit, $link_uri;
          });

          my $response = $ua->request(
              HTTP::Request->new(GET => $uri),
              sub { $p->parse($_[0]) },
          );
          $p->eof;
      }

      my @links = keys %seen;

      You should use LWP::RobotUA instead of LWP::UserAgent for this kind of application.

      Untested.

      Updated: The call to request was accidentally removed! Oops. Re-added. Added $p->eof.
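
      Following the LWP::RobotUA suggestion above, the swap is a small change (an editor's sketch; the agent string and e-mail address are placeholders). LWP::RobotUA is a subclass of LWP::UserAgent, so the rest of the crawler stays the same, and it fetches and obeys each site's robots.txt while rate-limiting requests:

      use strict;
      use warnings;
      use LWP::RobotUA;

      my $ua = LWP::RobotUA->new('my-crawler/0.1', 'me@example.com');   # placeholder name and address
      $ua->delay(1);    # wait at least 1 minute between requests to the same host

      # ... then call $ua->request(...) exactly as before.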

        Thanks ikegami for your suggestions and code. Quite a few pointers for improvements there. I was aware of the need to respect robots.txt so you have helped me there too.

        Thanks again.