Re: extracting data from HTML
by Corion (Patriarch) on Jun 03, 2012 at 12:02 UTC
Okay, that seems to work. HTML::TreeBuilder seems to be more forgiving.
However, $tree->dump gives a lot of information; luckily as_XML looks more readable again.
Now the next part... extracting the right pieces of information with XPath.
Some pieces will be quite easy, for example the title. Others will come from traversing a <TABLE>:
in the left column there is a data description, like 'Author', and in the right column the name, like 'Wall, L.' (sometimes inside an <a HREF=...>Author Name</a>, which makes it a bit more complicated, since I only want the text).
My guess is to look for a text element in a <td> tag etc. that equals "Author" and then do something with the next sibling?
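Something like this is what I have in mind - a rough sketch (assuming HTML::TreeBuilder::XPath; the file name and the exact 'Author' label are just placeholders):
use strict;
use warnings;
use HTML::TreeBuilder::XPath;

my $tree = HTML::TreeBuilder::XPath->new_from_file('record.html');

# find the <td> whose text is exactly "Author", then take the text of the
# next <td>; findvalue() returns the text content, so an <a> wrapper is ignored
my $author = $tree->findvalue(
    '//td[normalize-space(.)="Author"]/following-sibling::td[1]'
);
print "$author\n";
$tree->delete;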
All your scraping will always be specific to the page(s) you're scraping. Personally, I like to use CSS selectors, as they give results more quickly than fighting with XPath. Whenever CSS is not enough, I fall back to looking at the XPath expressions Firebug suggests for the elements, and work from those.
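For example, HTML::Selector::XPath can do the CSS-to-XPath translation for you; a rough sketch (the selector and file name here are made up):
use strict;
use warnings;
use HTML::Selector::XPath qw(selector_to_xpath);
use HTML::TreeBuilder::XPath;

my $tree  = HTML::TreeBuilder::XPath->new_from_file('record.html');
my $xpath = selector_to_xpath('table td a');   # becomes //table//td//a
print $_->as_text, "\n" for $tree->findnodes($xpath);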
Reading up on HTML::Selector::Xpath, I understand that its sole purpose is to translate CSS selectors into XPath expressions. Correct me if I'm wrong.
However, it doesn't seem capable of doing what is needed to solve the problem mentioned in Re^5: extracting data from HTML, where the parser seemed to have given each and every node a default namespace.
Wouldn't it be great if HTML::Selector::Xpath could give each and every element a user-definable 'default' namespace prefix? - but only those elements that do not have a namespace of their own, of course.
If you ask me, it can't be too difficult to implement that, is it?
If I understand you right, the (undocumented) "prefix" option already does that.
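If I read its source correctly, it works roughly like this (a hedged sketch; the selector and prefix are just examples):
use HTML::Selector::XPath;

# every element without its own namespace gets the given prefix
print HTML::Selector::XPath->new('td a')->to_xpath(prefix => 'x'), "\n";
# should print something like //x:td//x:a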
Re: extracting data from HTML
by tobyink (Canon) on Jun 03, 2012 at 15:22 UTC
While I am its maintainer, and thus biased, I firmly believe that HTML::HTML5::Parser is the best Perl HTML parser on the block. It's perhaps somewhat slower than HTML::Parser, but because it uses the same HTML5 parsing algorithm found in modern web browsers, it should do a better job on tag soup.
What's more, it parses the HTML into an XML::LibXML DOM tree, which I firmly believe is the best XML DOM for Perl (even though it's not pure Perl - it's based on libxml2, which is implemented in C).
I'm also the author of Web::Magic, which aims to integrate the two modules mentioned above with LWP::UserAgent and various other things to provide a "do what I mean" solution for interacting with RESTful HTTP resources. Here's an example using Web::Magic...
use 5.010;
use Web::Magic;

say Web::Magic
    -> new('http://www.perlmonks.org/', node_id => 974112)
    -> querySelector('title')
    -> textContent;
And here's how you'd do something similar without Web::Magic...
use 5.010;
use HTML::HTML5::Parser;

my $xml = HTML::HTML5::Parser->load_html(
    location => 'http://www.perlmonks.org/?node_id=974112'
);
my $nodes = $xml->findnodes('//*[local-name()="title"]');
say $nodes->get_node(1)->textContent;
perl -E'sub Monkey::do{say$_,for@_,do{($monkey=[caller(0)]->[3])=~s{::}{ }and$monkey}}"Monkey say"->Monkey::do'
It's alright to be biased.
I do like the idea of staying as up to date as possible; I sometimes have the suspicious feeling that the Perl community can't keep pace with all the changes anyway. There still isn't one single package that does XSLT 2.0 and XPath 2.0 and so on. Partly we rely on libxml2, which is not going to get an update to the next level.
I managed to get HTML::TreeBuilder::XPath working and am playing around with it at the moment. Getting the right text from the HTML source with XPath is quite a struggle anyway, frequently resulting in errors... but... I'm getting to grips with it, and it feels more reliable than running regexes on the source, especially since some parts consist of more than one <p> element. ->findvalues() does do a nice trick. I only need to get rid of the nasty cp1252 codes that slipped into an iso-8859-1 encoded HTML page; the € symbol isn't part of that encoding.
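For the cp1252 problem, my current plan is simply to decode the bytes as cp1252 instead of iso-8859-1 (a sketch, file name made up; cp1252 has the € at 0x80, which latin-1 does not):
use strict;
use warnings;
use Encode qw(decode);

open my $fh, '<:raw', 'page.html' or die $!;
my $bytes = do { local $/; <$fh> };
my $text  = decode('cp1252', $bytes);   # "\x80" becomes "\x{20AC}", the € sign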
I do not want to have a war between the monks, but please enlighten me more on why to use HTML5 instead of TreeBuilder
"There still isn't one single package that does XSLT 2.0"
There's XML::Saxon::XSLT2 (again, I'm the developer of it). It's a Perl wrapper around the Java Saxon library, using Inline::Java. It's a bit of a pain to install, and the bridge between Java and Perl has the potential to be flaky, but right now it's your only option if you need XSLT 2.0 in Perl.
I'd love to see some competitors to it spring up, I really would. The only reason I wrote it is because there was literally no other choice in Perl for XSLT 2.0; not out of a love for Java programming. ;-)
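From memory, its synopsis boils down to something like this (treat the exact arguments as approximate and double-check the module's docs; upper-case() is an XSLT 2.0 function, so this wouldn't work with an XSLT 1.0 engine):
use XML::Saxon::XSLT2;

my $xslt = <<'XSLT';
<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <out><xsl:value-of select="upper-case(/doc)"/></out>
  </xsl:template>
</xsl:stylesheet>
XSLT

my $trans  = XML::Saxon::XSLT2->new($xslt);      # stylesheet as a string
my $output = $trans->transform('<doc>hello</doc>');
print $output, "\n";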
"I do not want to have a war between the monks, but please enlighten me more on why to use HTML5 instead of TreeBuilder"
Two main reasons:
- If you want to use XML::LibXML, which as I say is a very good DOM implementation (with XPath, XML Schema, Relax NG, etc.), then HTML::HTML5::Parser integrates with it out of the box.
- It follows the parsing algorithm from the W3C HTML5 working drafts, allowing it to deal with tag soup in much the same way as desktop browsers do. (It currently passes the majority of the html5lib test suite. html5lib is an HTML parsing library for Python and Ruby, and is pretty much the de facto reference implementation of the HTML5 parsing algorithm.) If you wish to deal with random content off the Web, that's kinda important, because there are an awful lot more people who test their content in desktop browsers than test it in HTML::TreeBuilder.
A practical example. Check out the following piece of HTML in a desktop web browser. Note that (somewhat counter-intuitively) the paragraph containing the emphasised text is rendered above the "Hello World" greeting.
<table>
<tr><td>Hello World</td></tr>
<p>This will be rendered <em>before</em> the greeting.</p>
</table>
Now run this test script:
use 5.010;
use HTML::TreeBuilder;
use HTML::HTML5::Parser;

my $string = do { local $/ = <DATA> }; # slurp (local $/ makes <DATA> read everything)

say "HTML::HTML5::Parser...";
say HTML::HTML5::Parser
    -> load_html(string => $string)
    -> textContent;

say "HTML::TreeBuilder...";
say HTML::TreeBuilder
    -> new_from_content($string)
    -> as_text;

__DATA__
<table>
<tr><td>Hello World</td></tr>
<p>This will be rendered <em>before</em> the greeting.</p>
</table>
Note that HTML::HTML5::Parser returns the content in the same order as your web browser; HTML::TreeBuilder does not.
That said, there are plenty of good things about HTML::TreeBuilder too; and if neither of the above apply to you, then it's a good option. It's stable, mature and well-understood by many Perl programmers. I don't really have anything bad to say about it.
perl -E'sub Monkey::do{say$_,for@_,do{($monkey=[caller(0)]->[3])=~s{::}{ }and$monkey}}"Monkey say"->Monkey::do'
Is it only me that has this?
I tried to get it all working from the example with my $xml = HTML::HTML5::Parser->load_html..., but of course my test website had to come back with an error. I figured out that load_html doesn't handle options, so I had to use $parser->parse_html_file($URL, {ignore_http_response_code => 1}). However, of course this happens to me... the user agent was not accepted and the site returned an HTTP 406 error.
After tweaking around for a few hours, I managed to get it working:
use 5.010;
use LWP::UserAgent;
use HTML::HTML5::Parser;

# $URL holds the page being scraped (set elsewhere)
my $user_agent = LWP::UserAgent->new;
$user_agent->agent("HTML::HTML5::Parser/0.110 ");
$user_agent->parse_head(0);

my $parser = HTML::HTML5::Parser->new;
my $xml = $parser->parse_html_file($URL, {
    ignore_http_response_code => 1,
    user_agent                => $user_agent,
});
my $nodes = $xml->findnodes('//*[local-name()="title"]');
say $nodes->get_node(1)->textContent;
I'm proud I got it working, but I don't like having to bypass some of LWP::UserAgent's checks like this; somehow, it was necessary for this website.
Question: does this conflict with an HTTP 301 (moved permanently) status?
but of course my test website had to come back with an error
One tip for developing scrapers: it's both convenient for you and polite to the site you're scraping to save a local copy that you can hammer at all you want without bothering their server. If you're scraping a lot of pages and doing a lot of tweaking on your code, you have the potential of really hammering someone's server. Once your extractor works, then you can put back the Mechanize calls to the site, which are probably not the hard part.
In the example I gave upthread, it would have been ok for me to hammer the site, but I ended up cloning it with wget and running it locally.
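If you don't want to shell out to wget, LWP::Simple's mirror() does much the same from Perl (a sketch; the file name is arbitrary):
use strict;
use warnings;
use LWP::Simple qw(mirror);

# only re-downloads when the remote copy is newer than the local file
mirror('http://www.perlmonks.org/?node_id=974112', 'node_974112.html');
# ...then point the parser at the local copy while tweaking the extractor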
Update: You might also want to see if the site you're scraping has an API that hands you structured data. I recently had to pull down the links for about 140 books from the Apple site, and they have a nice API that lets you search by ISBN. Amazon also tends to have an API for a lot of things. Other sites often do as well if you dig through the fine print at the bottom of the page.
too bad...
I had hoped for a bit more exotic result from that HTML::HTML5::Parser. All I got was:
exctracting data from HTML
Using Data::Dumper( $xml ) doesn't give a nice result either:
$VAR1 = bless( do{\(my $o = 21921056)}, 'XML::LibXML::Document' );
time to do some more meditation
Yes, it returns plain text because the textContent method is documented as:
this function returns the content of all text nodes in the descendants of the given node as specified in DOM. (perldoc XML::LibXML::Node)
Data::Dumper won't be much use with XML::LibXML. Nodes are all just numeric pointers to structures on the other side of the XS boundary (i.e. C structures). There is XML::LibXML::Debugging, which allows, e.g.
print Dumper( $xml->toDebuggingHash );
perl -E'sub Monkey::do{say$_,for@_,do{($monkey=[caller(0)]->[3])=~s{::}{ }and$monkey}}"Monkey say"->Monkey::do'
When you're dealing with XML::LibXML, you'll need to wade through XML::LibXML::Node, from which most of the other classes inherit. Most of them have a ->toString method if you're interested in their contents.
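A tiny sketch of the difference, on a throwaway document:
use strict;
use warnings;
use XML::LibXML;

my $doc = XML::LibXML->load_xml(string => '<r><title>Hi &amp; bye</title></r>');
my ($node) = $doc->findnodes('//title');
print $node->toString,    "\n";   # <title>Hi &amp; bye</title>  (the markup)
print $node->textContent, "\n";   # Hi & bye                     (just the text)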
"exctracting data from HTML"
Oh how insanely stupid! ARRRGGGHHHH!!!!!#$#@@#$%&^%
All the time I was thinking it was a 'processing indicator' that something was being extracted by the HTML5 routine. ARRRRGGGHHHH!!!!
/me wonders... do monks curse
"exctracting data from HTML" is the title of that web page indeed, just as it was supposed to
now the next things to work on.... tomorrow
Re: extracting data from HTML
by zwon (Abbot) on Jun 03, 2012 at 11:52 UTC
# sighs
I have looked into so many modules already... and not one of the modules gave me a workable solution for something so obvious.
Can't it be simple, like:
my $BlahBlahParser = XML::BlahBlah->new();
my $XMLobj = $BlahBlahParser->load_html("http://www.perlmonks.org/");
and then use any ordinary XPath to query my document or extract some paragraphs of text?
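In my head it would look something like this (a sketch of the kind of interface I mean, using XML::LibXML's own load_html as a stand-in; recover => 2 is there to silence libxml2's complaints about tag soup):
use strict;
use warnings;
use XML::LibXML;

my $dom = XML::LibXML->load_html(
    location => 'http://www.perlmonks.org/',
    recover  => 2,
);
print $_->textContent, "\n" for $dom->findnodes('//title');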
Re: extracting data from HTML
by ww (Archbishop) on Jun 03, 2012 at 12:03 UTC
- open it? open
- retrieve? WWW::Mechanize or LWP::UserAgent, or threads here that satisfy search terms like "fetch" and "html" (or you could open your source in a browser and 'save as'); see the small fetch sketch after this list
- get nice XML? You may want to understand XML before raising this question... See w3.org/TR/rec-xml if you don't have a pretty good handle on what "eXtensible Markup Language" is... and search out xml parser here, if you do. See also nodes here satisfying a search term like "parse."
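A minimal fetch along those lines with WWW::Mechanize (a sketch; the URL is just an example):
use strict;
use warnings;
use WWW::Mechanize;

my $mech = WWW::Mechanize->new;
$mech->get('http://www.perlmonks.org/?node_id=974112');
my $html = $mech->content;   # raw HTML, ready to hand to a parser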
Hi,
the point is to get it into something I can handle with XPath and do some 'foreach' if needed
and yes, I've read most of the O'Reilly books:
- XML
- Perl & XML
- XML Schema
- XSLT
- XSLT cookbook
And that is the reason I turn to the monastery, for the answers are not to be found in those scrolls.
Don't look for a particular general module that will solve all your HTML-to-data problems. Look at the page or pages that you want to extract data from, and figure out what the best modules are for those particular cases. In my experience (which is less than most others' here), it's not worth the trouble to find something that will go straight from HTML to appropriately structured XML. Whoever generated the page had some database model and spewed it into some template that they invented, probably with no thought whatsoever to making it easy to turn back into data. Or they didn't even do things in a consistent way, making your problem of inverting it even worse.
If you have access to a lot of O'Reilly stuff, don't look at the general books. Look at a practical one - I started HTML scraping with recipes out of Spidering Hacks and still refer back to it occasionally.
Here's a recent example (after the more tag) where I wanted to copy book metadata from a bunch of pages on a website and put it into XML, so I could generate a catalog from the XML. The catch is that the pages were all hand coded. They did a pretty good job of using CSS to identify the relevant parts, but there were still inconsistencies, and a few of the older pages were so out of whack that they didn't get processed at all.
If you look at the code, it's pretty specific to the pages I was scraping, so it's ugly in all sorts of ways. It could also be made somewhat simpler if I needed to do it a bunch more times - it's a bit repetitive in pulling out a bunch of the labeled items, so those could be a loop through an array of names, with perhaps some flags in the array for special treatment. There are also extraneous modules called - the original pages were inconsistent about odd characters and entities, and that was one of the bigger headaches. Note how I find the pieces I want: I know how they're named, so I just do a "look down" to find them, and then process contents from there. Note also that I use XML::Writer to generate the XML, rather than trying to do it myself.
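A much-compressed sketch of that shape (the class name and fields here are made up, not the real site's):
use strict;
use warnings;
use HTML::TreeBuilder;
use XML::Writer;

my $tree = HTML::TreeBuilder->new_from_file('book_page.html');

# find a labeled piece by the class name the site used
my ($title_el) = $tree->look_down(_tag => 'span', class => 'book-title');

my $writer = XML::Writer->new(OUTPUT => \*STDOUT, DATA_MODE => 1, DATA_INDENT => 2);
$writer->startTag('book');
$writer->dataElement(title => $title_el ? $title_el->as_text : '');
$writer->endTag('book');
$writer->end;
$tree->delete;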
nice XML...
I probably should have said "well-formed", not even bothering with "valid" XML, for most websites don't produce XHTML, which makes it troublesome to just read in the source and get back an XML object; hence my question.
Re: extracting data from HTML
by locked_user sundialsvc4 (Abbot) on Jun 04, 2012 at 14:51 UTC
In my view, you have only a very few options, and all of them depend upon the original data source:
- If at all possible, change the source. If you are drawing data from a web page owned by someone you are friendly to (i.e. they will not view your actions as “scraping their databases”), then negotiate with them for a better feed. Maybe they have a SOAP interface; maybe they can build one.
- If the HTML has a consistent structure, then you can parse it meaningfully. But the structure has to be very meaningful.
- If not, you have to use regular expressions to recognize the “wheat” within the “chaff” of data. I have personally used that approach with Parse::RecDescent to extract data from thousands of SAS files, Korn shell scripts and Tivoli Workload Scheduler schedules. You must identify the “wheat” that contains data as well as enough of the “chaff” to establish context, then build a “forgiving” grammar; a toy sketch follows below. It wasn’t easy.
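A toy sketch of that last approach (the grammar and the input line are entirely made up):
use strict;
use warnings;
use Parse::RecDescent;

my $grammar = q{
    record : 'Author' ':' name   { $return = $item{name} }
    name   : /[^\n]+/
};
my $parser = Parse::RecDescent->new($grammar);
my $author = $parser->record('Author: Wall, L.');
print "$author\n";   # Wall, L.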