in reply to Simple link extraction tool

How am I supposed to use your program?

Suggestions:

  - use strict and warnings;
  - use WWW::Mechanize to fetch the page and extract the links;
  - check whether the request succeeded;
  - convert the links to absolute URLs;
  - remove duplicates and sort the output;
  - print to STDOUT and let the caller redirect to a file.

Suggestions applied:

use strict;
use warnings;

use List::MoreUtils qw( uniq );
use WWW::Mechanize  qw( );

# usage: linkextractor http://www.blah.com/ > listurls.txt

my ($url) = @ARGV;

my $mech = WWW::Mechanize->new();

my $response = $mech->get($url);
$response->is_success()
    or die($response->status_line() . "\n");

print
    map { "$_\n" }
    sort { $a cmp $b }
    uniq
    map { $_->url_abs() }
    $mech->links();
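
For example, assuming the script above is saved as linkextractor (the host name below is just a placeholder):

    $ perl linkextractor http://www.example.com/ > listurls.txt

listurls.txt then contains one absolute URL per line, sorted and deduplicated: url_abs() resolves each link against the page's base URL before uniq and sort are applied.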

Update: At first, I didn't realize it was outputting to STDOUT in addition to listurls.txt. I had recommended that the output be sent to STDOUT. This is a rewrite.

Re^2: Simple link extraction tool
by Scott7477 (Chaplain) on Jan 02, 2007 at 23:38 UTC
    Thanks for taking the time to educate me and produce working code per your suggestions. Prior to posting my code, what I found with Super Search was that queries about the existence of code like this simply got referred to CPAN modules, which was mildly surprising, since many SoPW posts get responses with code snippets that solve the poster's problem.

    I later found brian d foy's Re: Creating a web crawler (theory), which points to his webreaper, a tool apparently designed to download entire websites.
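
    For the curious, the heart of such a site downloader is just the link extraction above plus a queue. Here is a minimal sketch of a same-host crawler using the same WWW::Mechanize approach; this is my own illustration, not webreaper's actual code, and the command-line interface is assumed:

        use strict;
        use warnings;

        use WWW::Mechanize qw( );
        use URI            qw( );

        # usage: crawler http://www.example.com/
        my ($start) = @ARGV;
        my $host = URI->new($start)->host();

        # autocheck => 0 so a broken link skips that page instead of dying.
        my $mech = WWW::Mechanize->new( autocheck => 0 );

        my %seen;
        my @queue = ($start);

        while ( my $url = shift(@queue) ) {
            next if $seen{$url}++;

            my $response = $mech->get($url);
            next if !$response->is_success();
            next if !$mech->is_html();

            print("$url\n");    # A real tool would save the page to disk here.

            for my $link ( $mech->links() ) {
                my $abs = $link->url_abs();
                next if $abs->scheme() !~ /\Ahttps?\z/;  # skip mailto:, javascript:, etc.
                $abs->fragment(undef);                   # drop #anchors so pages aren't revisited
                push @queue, $abs->as_string()
                    if $abs->host() eq $host;            # stay on the original site
            }
        }

    The %seen hash and the queue give a breadth-first walk of the site; restricting to the starting host is what keeps the crawl from wandering off across the whole web.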

      One of the things you want to do when previewing a post is check that all your links go where you meant them to go. Had you done so, you would have found that your "webreaper" link doesn't work. You could even have simply copied the link from the source node: webreaper.

      Instead, you (apparently) wrote [cpan://dist/webreaper/]. ++ for a good guess, but it's wrong. The PerlMonks way to link efficiently to a distribution on CPAN is with [dist://webreaper] (⇒ webreaper). This is documented at What shortcuts can I use for linking to other information?

      Moral: Verify your links when you post.

      A word spoken in Mind will reach its own level, in the objective world, by its own weight