    use WWW::Mechanize;
    use HTML::TokeParser;

    my $webcrawler = WWW::Mechanize->new();
    $webcrawler->get("http://www.google.com");
    my $content = $webcrawler->content;

    my $parser = HTML::TokeParser->new(\$content);
    while ($parser->get_tag) {
        print $parser->get_trimmed_text(), "\n";
    }
This gives me the output I want, stripping almost all of the HTML from a web page and leaving just the text. Now, if I don't hard-code the web page and instead try to read the URL from <STDIN>, the script hangs when I run it with perl; it won't even let me type a URL in. The relevant lines are:
    my $url_name = <STDIN>;  # the user inputs the URL to be searched
    chomp $url_name;         # strip the trailing newline, or get() receives "url\n"
    $webcrawler->get($url_name);
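A minimal sketch of reading the URL interactively; the explicit prompt is my addition, and the chomp matters because without it the trailing newline from <STDIN> ends up inside the URL handed to get():

    use strict;
    use warnings;
    use WWW::Mechanize;

    print "Enter a URL: ";        # prompt so it is obvious the script is waiting
    my $url_name = <STDIN>;       # blocks here until the user presses Enter
    chomp $url_name;              # remove the trailing "\n" before fetching

    my $webcrawler = WWW::Mechanize->new();
    $webcrawler->get($url_name);  # e.g. http://www.google.com
    print $webcrawler->content;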
In the third version, once the while loop starts I want to save the parser output to a variable instead of printing it. I tried saving it both as an element of an array and as a plain scalar. So the loop would change to:
    while ($parser->get_tag) {
        my $stripped_html = $parser->get_trimmed_text(), "\n";
        # print $parser->get_trimmed_text(), "\n";
        print $stripped_html;
    }

But this gets stuck again for some reason, and looking at the process list it takes a lot of my CPU, so it is probably looping somewhere, though I don't know where. The same thing happens if I initialize an array and write the parser output into its first element. Any ideas? Thanks in advance.
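A sketch of collecting the stripped text rather than printing it. The @lines array and the sample HTML are my additions; note that in `my $x = f(), "\n";` the comma operator assigns only f()'s result and evaluates "\n" in void context, so string concatenation with `.` is probably what was intended:

    use strict;
    use warnings;
    use HTML::TokeParser;

    # Sample HTML stands in for the fetched page content.
    my $content = '<html><body><p>Hello</p><p>World</p></body></html>';
    my $parser  = HTML::TokeParser->new(\$content);

    my @lines;                            # accumulate each piece of text
    while ($parser->get_tag) {
        my $stripped_html = $parser->get_trimmed_text() . "\n";  # '.' concatenates
        push @lines, $stripped_html;      # save instead of printing
    }
    print @lines;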
In reply to Strange way a string or array works by lampros21_7