If you decide to run the script, here's how:

    perl naive_spider.pl 2 "http://code.google.com/hosting/"

#!/usr/bin/perl
use strict;
use warnings;

# Recursively spider the web starting from a page, down to a given
# depth level, like wget -r --level=...
my ($depth_level, $start_page) = @ARGV;
exit 1 unless $start_page;    # exit if there is no page to download

if ( !defined $depth_level || $depth_level > 0 ) {
    my $page_content = `curl $start_page 2>&1`;    # fetch the page contents
    # pull the links out of the page (naive: only matches unquoted href attributes)
    my @links = $page_content =~ /<a href=([^ ]*?)>/g;
    for (@links) {
        print "Working on link $_\n";
        # re-invoke this same script on each link, one level shallower
        my $new_call = "perl naive_spider.pl " . ($depth_level - 1) . " $_";
        `$new_call`;
    }
}
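For comparison, here's a sketch of the same idea that recurses inside a single Perl process instead of shelling out to curl and a fresh interpreter for every link. It uses LWP::Simple, HTML::LinkExtor and URI (the first two ship with libwww-perl); the spider() helper and the exact argument handling are my own arrangement, not part of the script above.

#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple qw(get);
use HTML::LinkExtor;
use URI;

# Fetch a page, extract its <a href> links, and recurse one level
# shallower on each of them.
sub spider {
    my ($depth, $url) = @_;
    return if $depth <= 0;

    my $html = get($url);          # undef on fetch failure
    return unless defined $html;

    my @links;
    my $parser = HTML::LinkExtor->new(sub {
        my ($tag, %attr) = @_;
        push @links, $attr{href} if $tag eq 'a' && defined $attr{href};
    });
    $parser->parse($html);
    $parser->eof;

    for my $link (@links) {
        my $abs = URI->new_abs($link, $url)->as_string;   # resolve relative links
        print "Working on link $abs\n";
        spider($depth - 1, $abs);
    }
}

my ($depth_level, $start_page) = @ARGV;
die "usage: $0 depth start_url\n" unless defined $start_page;
spider($depth_level, $start_page);

Like the original, this sketch doesn't dedupe visited URLs or respect robots.txt, so treat it as a toy rather than a real crawler.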
In reply to unix perl web spider by spx2