And now a second bit of help, possibly a much bigger bit than the last.

I'm not familiar with HTML::LinkExtor, and I don't really use LWP::UserAgent these days either, so I wrote something that takes advantage of my personal favorite for anything webpage-related, WWW::Mechanize.

I also never quite understood your original algorithm. If it were me (and in this case it is), I'd keep track of the URLs for a given link depth (weeding out duplicates) in my own data structure, rather than inserting them into a database and fetching them back out to re-crawl them.
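
To make that concrete, here's a minimal sketch of the sort of in-memory bookkeeping I mean. The names, the example URL, and the stubbed-out extract_links() are just illustrative, not part of the real program below:

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical stand-in for real link extraction, just to show the shape.
sub extract_links { return () }

my %visited;                             # keys double as the "already seen" set
my @queue     = ('http://www.example.com/');
my $max_depth = 3;

for my $depth ( 1 .. $max_depth ) {
    my @next;
    for my $url ( @queue ) {
        next if $visited{$url}++;        # weed out duplicates, no database needed
        push @next, extract_links($url); # collect this level's links
    }
    @queue = @next;                      # next pass crawls one level deeper
}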

I'm also not clear from your specs on whether you want URLs that are off-site. The logic for how this program handles that is clearly documented, so if it isn't to your spec, adjust it.
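
For instance, if your spec does call for following off-site links differently, I'd probably switch from a path-prefix match to a host comparison. Here's a sketch using the URI module (which WWW::Mechanize pulls in anyway); the base URL is just an example:

use strict;
use warnings;
use URI;

my $base_host = URI->new('http://www.example.com/dir/')->host;

# Returns true if a link points at the same host as the base url.
# The eval guards against schemes like mailto: that have no host.
sub is_on_site {
    my ( $link ) = @_;
    my $host = eval { URI->new($link)->host } || '';
    return $host eq $base_host;
}

print is_on_site('http://www.example.com/other/') ? "follow\n" : "skip\n";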

Having said all that, here is a recursive link crawler. (Though now that I type out "recursive link crawler", I can't imagine this hasn't been done before, and I'm certain a search would turn one up fairly quickly. Oh well.)

#!/usr/bin/perl

use strict;
use warnings;
use WWW::Mechanize;

my $url = shift || die "Please pass in base url as argument to $0\n";

my %visited;
my @links;
my $max_depth = 3;
my $depth     = 0;

my $mech = WWW::Mechanize->new();

# This helps prevent following off-site links.
# Note, assumes that url's passed in will represent the
# highest level in a website hierarchy that will be visited.
# i.e. http://www.example.com/dir/ will record a link to
# http://www.example.com/, but will not follow it and report
# subsequent links.
my( $base_uri ) = $url =~ m|^(.*/)|;

get_links( $url );

sub get_links {
    my @urls = @_;
    my @found_links;

    for ( @urls ) {
        # This prevents following off-site or off-parent links.
        next unless m/^$base_uri/;

        $mech->get( $_ );

        # Filters out links we've already visited, plus mailto and
        # javascript hrefs. Adjust to suit.
        my @new_links = grep { ++$visited{$_} == 1 && ! /^(mailto|javascript)/i }
                        map  { $_->url_abs() }
                        $mech->links();

        # Accumulate across all urls at this depth, so the recursion
        # below sees every new link, not just the last page's.
        push @found_links, @new_links;
        push @links,       @new_links;
    }

    # Keep going, as long as we should.
    get_links( @found_links ) if $depth++ < $max_depth;
}

# Instead of printing them, you could insert them into the database.
print $_ . "\n" for @links;

Inserting the links into a database is left as an exercise for the reader.
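
(Well, mostly. Since the thread is about MySQL, here's a rough sketch of that exercise using DBI, assuming a links table with a UNIQUE url column and made-up connection details; it would replace the print loop at the end of the program above.)

use DBI;

# Assumed database name, credentials, and table -- adjust to your setup.
my $dbh = DBI->connect(
    'DBI:mysql:database=crawler;host=localhost',
    'user', 'password',
    { RaiseError => 1 },
);

# INSERT IGNORE is MySQL-specific: it quietly skips rows that would
# violate the UNIQUE index on url, so re-runs don't error out.
my $sth = $dbh->prepare('INSERT IGNORE INTO links (url) VALUES (?)');
$sth->execute($_) for @links;

$dbh->disconnect;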



--chargrill
s**lil*; $*=join'',sort split q**; s;.*;grr; &&s+(.(.)).+$2$1+; $; = qq-$_-;s,.*,ahc,;$,.=chop for split q,,,reverse;print for($,,$;,$*,$/)

