in reply to Re: HTML stripper in WWW::Mechanize doesn't seem to work
in thread HTML stripper in WWW::Mechanize doesn't seem to work

Or the OP could use HTML::TreeBuilder, as shown in my reply (Re^3: Syntax error for WWW::Mechanize) to the OP's first post, Syntax error for WWW::Mechanize.

:-)
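
A minimal sketch of that HTML::TreeBuilder approach (the URL is a placeholder, and this assumes HTML::TreeBuilder is installed alongside WWW::Mechanize):

#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;
use HTML::TreeBuilder;

# Fetch the page with Mechanize, then hand the raw HTML to
# HTML::TreeBuilder, which parses it and strips the markup.
my $mech = WWW::Mechanize->new();
$mech->get('http://www.example.com/');   # placeholder URL

my $tree = HTML::TreeBuilder->new_from_content( $mech->content );
print $tree->as_text, "\n";              # page text with tags removed
$tree->delete;                           # free the parse tree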

Perl is Huffman encoded by design.

Re^3: HTML stripper in WWW::Mechanize doesn't seem to work
by lampros21_7 (Scribe) on Aug 01, 2005 at 10:43 UTC
    Right, I have made the necessary changes and I think the code works fine now. The problem is that the content( format => "text" ) method of the WWW::Mechanize http://search.cpan.org/dist/WWW-Mechanize/lib/WWW/Mechanize.pm module doesn't seem to work: I have used it with Google and perlmonks.com and it gives me the whole content, markup included. Does anyone else have the same problem, or is it something in my code?

    Updated code:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Modules used to go through the web pages: WWW::Mechanize can extract
    # links, save them, and also strip the HTML from the contents
    use WWW::Mechanize;
    use URI;

    print "WEB CRAWLER AND HTML EXTRACTOR\n";
    print "Please input the URL of the site to be searched\n";
    print "Please use a full URL (eg. http://www.dcs.shef.ac.uk/)\n";

    # Create an instance of the web crawler
    my $webcrawler = WWW::Mechanize->new();

    my $url_name = <STDIN>;   # The user inputs the URL to be searched
    chomp $url_name;          # Remove the trailing newline from the input
    my $uri = URI->new($url_name);   # Process the URL and make it a URI

    # Grab the contents of the URL given by the user
    $webcrawler->get($uri);

    # Put the links that exist in the HTML of the page in an array
    # (links() takes no arguments; it operates on the current page)
    my @website_links = $webcrawler->links();

    # The HTML is stripped from the contents and the text is stored
    # in an array of strings
    my @stripped_html;
    push @stripped_html, $webcrawler->content( format => "text" );
    print $stripped_html[0];

    exit;
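    One hedged note on the content( format => "text" ) symptom: that option depends on HTML::TreeBuilder being installed and was not present in early WWW::Mechanize releases, so an old installation may simply ignore the argument and hand back the raw HTML. Checking the installed version is a quick first step:

    perl -MWWW::Mechanize -le 'print $WWW::Mechanize::VERSION'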

    Thanks