Re^2: WWW::Mechanize problem

by Anonymous Monk
on Oct 20, 2005 at 11:14 UTC ( [id://501620] )


in reply to Re: WWW::Mechanize problem
in thread WWW::Mechanize problem

Cool, thanks!
My problem is that I can't figure out how to recurse this, so that I visit every link on the site. Can you offer any pointers there?

Re^3: WWW::Mechanize problem
by Limbic~Region (Chancellor) on Oct 20, 2005 at 12:34 UTC
    Anonymous Monk,
    First of all, you probably want to verify that your code doesn't conflict with any of the site's policies. Even if you are "ok", you likely want to sleep in between page fetches like a good net citizen. Ok, now on to your question of recursion.

    This can easily turn into an infinite loop, so it is important to keep track of where you have already visited. I would suggest a stack/queue approach along with a %seen cache. The following illustrates what I mean:

    # mechanize fetching of first page
    my %seen;
    my @links = $mech->links();
    while ( @links && @links < 1_000 ) {
        my $link = shift @links;
        my $url  = $link->url();
        next if $seen{$url}++;
        # mechanize fetch of $url
        push @links, $mech->links();
        sleep 1;
    }
    This will prevent you from fetching the same URL twice, and it will stop when you have no more links to visit or when the site turns out to have far more links than you intended to follow. The 1_000 is an arbitrary limit and need not be there at all. You can switch between depth-first and breadth-first traversal by adjusting push/unshift and shift/pop, as sketched below.
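
    To make the placeholder comments concrete, here is a minimal, self-contained sketch of the same loop using WWW::Mechanize. The starting URL and the same-host restriction are my own assumptions for illustration, not part of the code above:

        use strict;
        use warnings;
        use WWW::Mechanize;
        use URI;

        my $start_url = 'http://www.example.com/';   # hypothetical starting point
        my $host      = URI->new($start_url)->host;

        my $mech = WWW::Mechanize->new( autocheck => 0 );
        $mech->get($start_url);

        my %seen  = ( $start_url => 1 );
        my @links = $mech->links();

        while ( @links && @links < 1_000 ) {
            my $link = shift @links;
            my $url  = $link->url_abs();             # absolute URI object
            next if $url->scheme !~ /^https?$/;      # skip mailto:, javascript:, etc.
            next if $url->host ne $host;             # stay on one site (assumption)
            next if $seen{ $url->as_string }++;      # never fetch the same page twice

            $mech->get($url);
            next unless $mech->success && $mech->is_html;

            push @links, $mech->links();             # queue this page's links
            sleep 1;                                 # be a good net citizen
        }

    Using url_abs() instead of url() normalizes relative links to absolute ones, which keeps the %seen check from treating the same page as two different entries.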

    Cheers - L~R

      Hi,

      Thank you for your help.
      Can you explain how this method can be used to walk up and down the links?
      I don't think I understand how the link depth comment you made works.
      Thanks again
        Anonymous Monk,
        Sometimes, it helps to stop thinking about the problem at hand. You get too close to the problem and you miss the forest for the trees. The process I am about to explain can be applied to any similar problem and is not unique to your situation. Once you understand that, you have the tools you need to solve it yourself in the future.

        Problem Description:

        You start out with some number of jobs to perform. In working on those jobs, you discover that you have new jobs to work on. The total number of jobs to perform can't be known ahead of time. Additionally, it is possible for one job to lead to another job which leads back to the original job. If we think of the number of known jobs at any given time as a stack or a queue, then we know we can stop work when it is empty.
        my @work = fetch_jobs($starting_condition);
        while ( @work ) {
            # ...
        }

        When we evaluate an array in boolean context, it is false when the array is empty, so the while loop terminates once all work has been completed. Inside the loop, we add to our known work queue/stack by checking whether the current job leads to more work.

        my @work = fetch_jobs($starting_condition);
        while ( @work ) {
            # Remove 1 item from our stack/queue
            my $job = shift @work;
            # Possibly add new jobs to our stack/queue
            if ( more_jobs($job) ) {
                push @work, fetch_jobs($job);
            }
            # process job
        }
        We now need to consider that one job may lead back to itself, and guard against the resulting infinite loop:
        my @work = fetch_jobs($starting_condition);
        my %seen;
        while ( @work ) {
            # Remove 1 item from our stack/queue
            my $job = shift @work;
            # Skip this job if we have already done it
            next if $seen{$job}++;
            # Possibly add new jobs to our stack/queue
            if ( more_jobs($job) ) {
                push @work, fetch_jobs($job);
            }
            # process job
        }
        We can additionally decide to abandon our work if we discover that our queue/stack has grown larger than we anticipated. We rely on the fact that an array evaluated in scalar context returns the number of elements it contains.
        my @work = fetch_jobs($starting_condition);
        my %seen;
        while ( @work && @work < 1000 ) {
            # ...
        }
        Now it may be important to process the work in a specific order. A depth-first approach is when one job leads to another job which leads to another job, and they need to be processed in that order. A breadth-first approach is when secondary and tertiary jobs are only executed after all primary jobs are complete. The way to control this is by adjusting which end of the stack/queue you take work off of and put work onto; the sketch below shows both variants. See push, pop, shift, unshift for more details.
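
        As a concrete illustration of that last point, here is a small runnable sketch; the toy %children job tree and fetch_jobs() are made-up stand-ins for whatever produces new work in your program:

            use strict;
            use warnings;

            # Toy job tree: each job leads to zero or more child jobs
            my %children = ( A => [qw(B C)], B => [qw(D E)], C => [qw(F)] );
            sub fetch_jobs { @{ $children{ $_[0] } || [] } }

            for my $order (qw(breadth depth)) {
                my @work = ('A');
                my @done;
                while (@work) {
                    # The only difference: which end of @work we take from
                    my $job = $order eq 'breadth' ? shift @work : pop @work;
                    push @done, $job;
                    push @work, fetch_jobs($job);
                }
                print "$order-first: @done\n";
            }

        Running this prints "breadth-first: A B C D E F" and "depth-first: A C F B E D", showing how one word (shift vs. pop) changes the traversal order.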

        Cheers - L~R

        Update: Minor oversights corrected per the astute herveus.
