http://qs1969.pair.com?node_id=501486


in reply to WWW::Mechanize problem

WWW::Mechanize's links() method returns an array of WWW::Mechanize::Link objects. You can use those objects to follow the URLs like this:

my @links = $mech->links();

foreach my $link ( @links ) {
    my $temp_mech = WWW::Mechanize->new();
    $temp_mech->get( $link->url_abs() );   # url_abs() gives the absolute URL
    # Do whatever you want now...
}
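
A variation on the same idea (not from the original reply, just a sketch): reuse one Mechanize object and step back to the starting page after each visit with back(), instead of creating a new browser object per link. The starting URL here is hypothetical:

use strict;
use warnings;
use WWW::Mechanize;

my $mech = WWW::Mechanize->new();
$mech->get( 'http://example.com/' );    # hypothetical starting page

for my $link ( $mech->links() ) {
    $mech->get( $link->url_abs() );     # visit the link
    # ... work with $mech->content() here ...
    $mech->back();                      # return to the starting page
}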

Dave

Re^2: WWW::Mechanize problem
by Anonymous Monk on Oct 20, 2005 at 11:14 UTC
    Cool, thanks.
    My problem is that I can't figure out how to recurse this, so that I am visiting each link on the site. Can you offer any pointers there?
      Anonymous Monk,
      First of all, you probably want to verify that your code doesn't conflict with any of the site's policies. Even if you are "ok", you likely want to sleep in between page fetches like a good net citizen.
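
      One concrete way to check a site's robots.txt before crawling (this is just a sketch, not part of the original reply; it uses the WWW::RobotRules module from libwww-perl, and the agent name and URLs below are made up):

      use strict;
      use warnings;
      use WWW::RobotRules;
      use LWP::Simple qw(get);

      # Hypothetical crawler name and site
      my $rules      = WWW::RobotRules->new('MyCrawler/0.1');
      my $robots_url = 'http://example.com/robots.txt';

      my $robots_txt = get($robots_url);
      $rules->parse($robots_url, $robots_txt) if defined $robots_txt;

      my $url = 'http://example.com/some/page.html';
      if ( $rules->allowed($url) ) {
          # safe to fetch $url with WWW::Mechanize
      }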

      Ok, now on to your question of recursion. This can easily turn into an infinite loop, so it is important to keep track of where you have already visited. I would suggest using a stack/queue approach along with a %seen cache. The following is an illustration of what I mean:

      # mechanize fetching of first page
      my %seen;
      my @links = $mech->links();

      while ( @links && @links < 1_000 ) {
          my $link = shift @links;
          my $url  = $link->url_abs();   # absolute URL, safe to dedupe on
          next if $seen{$url}++;

          $mech->get( $url );            # mechanize fetch of $url
          push @links, $mech->links();
          sleep 1;
      }
      This will keep you from fetching the same URL twice, and it will stop either when there are no more links to visit or when the site turns out to have far more links than you intended to follow. The 1_000 was an arbitrary limit and need not be there at all. You can switch between depth-first and breadth-first traversal by adjusting push/unshift and shift/pop, as sketched below.
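
      For example, a depth-first variant of the same loop (a sketch, reusing the $mech, %seen and @links variables from above) changes only how newly found links enter the list:

      while ( @links && @links < 1_000 ) {
          my $link = shift @links;
          my $url  = $link->url_abs();
          next if $seen{$url}++;

          $mech->get( $url );
          unshift @links, $mech->links();   # unshift + shift => depth first
          sleep 1;
      }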

      Cheers - L~R

        Hi,

        Thank you for your help.
        Can you explain how this method can be used to walk up and down the links?
        I don't think I understand how the link depth comment you made works.
        Thanks again.