in reply to Parsing Links from .php
It is irrelevant whether or not it is PHP. PHP generates HTML, and it is the HTML you need to parse. Using Apache's rewrite rules I could generate HTML with Perl but make the pages look like PHP-generated output. A certain sort of webmaster might even want to do that to confuse hackers.
WWW::Mechanize may be overkill for your needs; LWP, on which the former is built, should be adequate. (I think WWW::Mechanize really comes into its own when you need to log in to a website to see the content, and so on.)
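For what it is worth, here is a minimal WWW::Mechanize sketch of the link-extraction task, just to show what the "overkill" option looks like; the URL is only a placeholder:

    use strict;
    use warnings;
    use WWW::Mechanize;

    my $mech = WWW::Mechanize->new( autocheck => 1 );
    $mech->get('http://example.com/page.php');    # placeholder URL

    # find_all_links() returns WWW::Mechanize::Link objects
    for my $link ( $mech->find_all_links( tag => 'a' ) ) {
        print $link->url_abs, "\n";
    }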
In an ideal world (well, actually, some people think that in an ideal world the web would use YAML or something rather than HTML, but that is a different story) all HTML would be valid XHTML. Then you could use XML::LibXML to parse the page and off you go. In practice this is highly unlikely to be the case, so you are better off using HTML::TreeBuilder. Grabbing some code from something similar (but not directly reusable), you probably want something like:
    require LWP::UserAgent;
    require HTML::TreeBuilder;

    my $ua       = LWP::UserAgent->new(......);
    my $response = $ua->get(......);

    if ($response->is_success) {
        # get the document from the web
        my $r = $response->decoded_content;    # or whatever

        my $tidied_doc = HTML::TreeBuilder->new_from_content($r)->as_HTML();
        ..................
    }
    else {
        die $response->status_line;
    }
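Since the original question is about extracting links, a minimal sketch of what the elided part might look like follows: instead of re-serialising with as_HTML, keep the parse tree and walk it for anchor tags (look_down and attr come from HTML::Element, which HTML::TreeBuilder inherits from). The URL is only a placeholder.

    use strict;
    use warnings;
    use LWP::UserAgent;
    use HTML::TreeBuilder;

    my $ua       = LWP::UserAgent->new;
    my $response = $ua->get('http://example.com/page.php');    # placeholder URL
    die $response->status_line unless $response->is_success;

    my $tree = HTML::TreeBuilder->new_from_content($response->decoded_content);

    # look_down() finds every <a> element; attr() reads its href
    for my $anchor ( $tree->look_down( _tag => 'a' ) ) {
        my $href = $anchor->attr('href');
        print "$href\n" if defined $href;
    }

    $tree->delete;    # free the parse tree's memory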
The other problem is that if the web page has any sort of international content, it is quite likely to declare itself as encoded in Latin-1 but actually contain a mixture of Latin-1 and UTF-8 encoded characters.
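One common workaround (not from the original post, just a sketch of one approach) is to ignore the declared charset, try to decode the raw bytes as UTF-8, and fall back to Latin-1 only if that fails; the URL below is a placeholder.

    use strict;
    use warnings;
    use Encode qw(decode);
    use LWP::UserAgent;

    my $ua       = LWP::UserAgent->new;
    my $response = $ua->get('http://example.com/page.php');    # placeholder URL
    die $response->status_line unless $response->is_success;

    # Undo any Content-Encoding (gzip etc.) but skip charset decoding,
    # so we get the raw bytes of the page.
    my $bytes = $response->decoded_content( charset => 'none' );

    # Try UTF-8 first; if the bytes are not valid UTF-8, fall back to
    # Latin-1, which cannot fail.
    my $html = eval { decode( 'UTF-8', $bytes, Encode::FB_CROAK | Encode::LEAVE_SRC ) }
            // decode( 'ISO-8859-1', $bytes );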