Here's your first bit of help: I've added use strict; and use warnings; near the top and fixed the resulting errors and warnings. I've run your code through perltidy (a few times) and fixed up some other errors that were introduced. I also fixed a few quoting and commenting issues, caused possibly by cut-and-paste errors but definitely by your broken database connection string, which I repaired as well. Finally, I've terminated your while and foreach loops (near the end), which may or may not be the right place to close them; I can't fully run your code since I don't have all the modules installed, nor a database handy at the moment.
I have run it once and passed it a base URL; it spat out a few of the links on the page, so I suppose it's doing something properly.
Try running this modified version and see what happens. If you make any changes, please format it for readability before posting again.
#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;
use HTML::LinkExtor;
use URI::URL;
use DBI();
chomp( my $url = <> );    # for instance; chomp strips the trailing newline
#my $depth = 0;
my @link = ();
my $ua = LWP::UserAgent->new;

# Set up a callback that collects links
my @a = ();
sub callback {
    my ( $tag, %attr ) = @_;
    return if $tag ne 'a';
    push( @a, values %attr );
}

# Make the parser. Unfortunately, we don't know the base yet (it might be
# different from $url)
my $p = HTML::LinkExtor->new( \&callback );
my $res = $ua->request( HTTP::Request->new( GET => $url ),
    sub { $p->parse( $_[0] ) } );

# Expand all link URLs to absolute ones
my $base = $res->base;
@a = map { $_ = url( $_, $base )->abs; } @a;

# Print them out
print join( "\n", @a ), "\n";
my $dbh = DBI->connect( "DBI:mysql:database=gatxp;host=\"\"", "", "" );
#$dbh->do(" CREATE TABLE newlinks( md5 INTEGER(100) not null "
# ."primary key, webpage VARCHAR(80) not null) ");
$dbh->do(" INSERT INTO newlinks VALUES( 'MD5', '0', '$base', '1' ) ");
foreach $a (@a) {
    $dbh->do(" INSERT INTO newlinks VALUES( '', '1', '$a', '0' ) ");
}
my $sth = $dbh->prepare('SELECT * FROM newlinks')
    or die " Couldn't prepare statement : " . $dbh->errstr;
$sth->execute();
while ( my $ref = $sth->fetchrow_hashref() ) {
    my $link = $ref->{'webpage'};
    foreach $link (@link) {
        my $usa = LWP::UserAgent->new;
        $p = HTML::LinkExtor->new( \&callback );
        my $res = $usa->request( HTTP::Request->new( GET => $link ),
            sub { $p->parse( $_[0] ) } );
        $base = $res->base;
        @link = map { $_ = url( $_, $base )->abs; } @link;

        # Print them out
        print "$link\n";
    }
}
$sth->finish();
$dbh->disconnect();
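One thing I couldn't fix without knowing your schema: the commented-out CREATE TABLE only defines two columns (md5 and webpage), yet your INSERT statements supply four values, so one or the other must be out of date. Assuming the table really does have four columns (the names below are only my guess), you could replace the INSERT do() calls with a prepared statement and placeholders, which also means DBI handles the quoting if a URL happens to contain an apostrophe:

my $insert = $dbh->prepare(
    'INSERT INTO newlinks (md5, depth, webpage, crawled) VALUES (?, ?, ?, ?)' )
    or die " Couldn't prepare insert : " . $dbh->errstr;

# Same inserts as the do() calls above, but with DBI doing the quoting.
$insert->execute( 'MD5', 0, "$base", 1 );
$insert->execute( '',    1, "$_",    0 ) for @a;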
HTH
--chargrill
And now a second bit of help, possibly a much bigger bit than the first.
I'm not familiar with HTML::LinkExtor, and I really don't use LWP::UserAgent these days either, so I wrote something taking advantage of my personal favorite for anything webpage related, WWW::Mechanize.
I also never quite understood your original algorithm. If it were me (and in this case it is), I'd keep track of URLs (weeding out duplicates) for a given link depth on my own, in my own data structure, as opposed to inserting things into a database and fetching them back out to re-crawl them.
I'm also not clear from your specs whether or not you want URLs that are off-site. The way this program handles that is pretty clearly documented in the comments, so if it isn't to your spec, adjust it.
Having said all that, here is a recursive link crawler. (Though now that I type out "recursive link crawler", I can't imagine this hasn't been done before, and I'm certain a search would turn one up fairly quickly. Oh well.)
#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;
my $url = shift || die "Please pass in base url as argument to $0\n";
my %visited;
my @links;
my $max_depth = 3;
my $depth = 0;
my $mech = WWW::Mechanize->new();
# This helps prevent following off-site links.
# Note: assumes that URLs passed in will represent the
# highest level in a website hierarchy that will be visited,
# e.g. http://www.example.com/dir/ will record a link to
# http://www.example.com/, but will not follow it and report
# subsequent links.
my( $base_uri ) = $url =~ m|^(.*/)|;
get_links( $url );
sub get_links {
    my @urls = @_;
    my @found_links;
    for (@urls) {

        # This prevents following off-site or off-parent links.
        next unless m/^$base_uri/;
        $mech->get( $_ );

        # Filters out links we've already visited, plus mailto's and
        # javascript: etc. hrefs. Adjust to suit.
        @found_links = grep { ++$visited{$_} == 1 && !/^(mailto|javascript)/i }
                       map  { $_->url_abs() } $mech->links();
        push @links, @found_links;
    }

    # Keep going, as long as we should.
    get_links( @found_links ) if $depth++ < $max_depth;
}
# Instead of printing them, you could insert them into the database.
print $_ . "\n" for @links;
Inserting the links into a database is left as an exercise for the reader.
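If you do want them in the database instead, here's a minimal sketch. It assumes a $dbh connected as in the first script and guesses at the same four newlinks columns, so adjust both to your real setup:

use Digest::MD5 qw(md5_hex);

my $insert = $dbh->prepare(
    'INSERT INTO newlinks (md5, depth, webpage, crawled) VALUES (?, ?, ?, ?)');

# @links holds URI objects, so stringify them explicitly. The list doesn't
# record the depth each link was found at, so 0 is stored here; track
# ($link, $depth) pairs in get_links() if you need the real depth.
$insert->execute( md5_hex("$_"), 0, "$_", 0 ) for @links;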
--chargrill
Hello,
Wish you a happy new year, and thanks for the help!
I just want to ask: when I set the $max_depth variable to 3 or 2, it gives me the same output.