in reply to web page source?

I just wrote something like this. LWP::Simple is great for getting the page. To split out the URLs I used HTML::TokeParser, which is great for walking through a document and grabbing the URLs. In my code I had to use both the link text and the URL. This is a chunk of the program as a demo; it will just print a list of links from a web site.
#!/usr/bin/perl -w
use strict;
use HTML::TokeParser;
use LWP::Simple;

my $page = get('http://web.site.here/file.html');
unless (defined $page) {
    die "Unable to retrieve page\n";    # LWP::Simple's get doesn't set $!
}

my @links;    # two-dimensional array to hold text/URL pairs
my $cnt = 0;
my $p = HTML::TokeParser->new(\$page);
while (my $token = $p->get_tag("a")) {
    my $url  = $token->[1]{href} || "-";
    my $text = $p->get_trimmed_text("/a");
    $links[$cnt][0] = $text;
    $links[$cnt][1] = $url;
    $cnt++;
}

# Sample of accessing the links array
$cnt = 0;
my $size = @links;
while ($cnt < $size) {
    print "Text:$links[$cnt][0]\tURL:$links[$cnt][1]\n";
    $cnt++;
}
exit();
PS: I'm always open to suggestions; I'm still pretty new at this.

Replies are listed 'Best First'.
Re: Re: web page source?
by Desdinova (Friar) on Feb 23, 2001 at 02:07 UTC
    For the question, SimpleLinkExtor is a better solution. My code is actually from a script that ends up doing some parsing on the text part to extract a date and then sorting the array by date. I chopped it down to make the meaning easier to follow.
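    For reference, here's a minimal sketch of that approach using the HTML::SimpleLinkExtor module from CPAN (the URL is a placeholder, and the method names assume that module's documented interface):

    #!/usr/bin/perl -w
    use strict;
    use LWP::Simple;
    use HTML::SimpleLinkExtor;

    my $page = get('http://web.site.here/file.html');
    die "Unable to retrieve page\n" unless defined $page;

    my $extor = HTML::SimpleLinkExtor->new();
    $extor->parse($page);

    # a() returns href attributes of <a> tags only;
    # links() would return links of every type (img, frame, etc.)
    foreach my $url ($extor->a) {
        print "URL:$url\n";
    }

    Note that this only gets you the URLs; unlike the TokeParser loop above, it doesn't pair each link with its anchor text, which is why my original script used TokeParser.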