in reply to Crawling all urls on a site
Using LWP::Simple, you can first fetch the web page like so:
    use LWP::Simple;

    my $src = get($pageBase);

Then you can use a regex to pull the links out of that source. You will have to handle links that are quoted with both ' and ". For example:

    my @pageLinks = ();

    # In list context, a /g match returns every capture, so both
    # quoting styles can be pushed onto the list in one go.
    push @pageLinks, $src =~ /<a href='([^']+)'/gs;
    push @pageLinks, $src =~ /<a href="([^"]+)"/gs;
One thing you will have to watch out for is whether a link is relative or absolute. Unfortunately, this can make it quite hard to check whether a link has already been visited. If a site uses .. within its links (not a good idea, I know, but it has been done), you will have to resolve that to the actual link before adding it to your hash of visited URLs; one way to do that is sketched below.
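As an illustration (my own sketch, not part of the original advice), the URI module from CPAN will resolve a relative link, including any .. segments, against the page it came from; $pageBase and @pageLinks are the variables from the snippets above:

    use URI;

    my %visited;
    for my $link (@pageLinks) {
        # Resolve the (possibly relative) link against the page it was
        # found on and normalize it into an absolute URL.
        my $abs = URI->new_abs( $link, $pageBase )->canonical->as_string;

        next if $visited{$abs}++;    # skip links we have already seen
        # ... fetch and crawl $abs here ...
    }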
However, I would recommend using one of the other modules on CPAN. They have generally been well tested and are often the most efficient way of doing this. One that you can take a look at is WWW::SimpleRobot; a rough sketch of how it is used follows.
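This is only an outline from memory of the module's synopsis, so check the option names against the WWW::SimpleRobot documentation before relying on them; the example.com URL is a placeholder:

    use WWW::SimpleRobot;

    my $robot = WWW::SimpleRobot->new(
        URLS           => [ 'http://www.example.com/' ],
        FOLLOW_REGEX   => '^http://www\.example\.com/',
        DEPTH          => 2,
        TRAVERSAL      => 'depth',
        VISIT_CALLBACK => sub {
            my ( $url, $depth, $html, $links ) = @_;
            print "Visiting $url\n";    # do your per-page work here
        },
    );

    $robot->traverse;    # crawls everything matching FOLLOW_REGEX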
Update: One more thing of note: if you will be crawling sites, make sure to leave a decent delay between gets. If all of the links you are crawling are on the same server, pounding it with back-to-back requests will not be appreciated at all. Something as simple as the sleep shown below will do.
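For example (again my own sketch), a short sleep inside the fetch loop keeps the crawl polite; the two-second value is arbitrary. LWP::RobotUA is another option worth a look, since it honours robots.txt and enforces a delay between requests for you:

    use LWP::Simple;

    for my $url (@pageLinks) {
        my $html = get($url);
        # ... extract links from $html, queue new ones, etc. ...

        sleep 2;    # arbitrary pause so we do not hammer the server
    }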