in reply to extracting data from HTML

While I am its maintainer, and thus biased, I firmly believe that HTML::HTML5::Parser is the best Perl HTML parser on the block. It's perhaps somewhat slower than HTML::Parser, but because it uses the same HTML5 parsing algorithm found in modern web browsers, it should do a better job on tag soup.

What's more, it parses the HTML into an XML::LibXML DOM tree, which I firmly believe is the best XML DOM for Perl (even though it's not pure Perl - it's based on libxml2, which is implemented in C).

I'm also the author of Web::Magic which aims to integrate the two modules mentioned above with LWP::UserAgent and various other things to provide a "do what I mean" solution for interacting with RESTful HTTP resources. Here's an example using Web::Magic...

use 5.010;
use Web::Magic;

say Web::Magic
  -> new('http://www.perlmonks.org/', node_id => 974112)
  -> querySelector('title')
  -> textContent;

And here's an example of how you'd do something similar without Web::Magic...

use 5.010;
use HTML::HTML5::Parser;

my $xml = HTML::HTML5::Parser->load_html(
  location => 'http://www.perlmonks.org/?node_id=974112',
);
my $nodes = $xml->findnodes('//*[local-name()="title"]');
say $nodes->get_node(1)->textContent;
perl -E'sub Monkey::do{say$_,for@_,do{($monkey=[caller(0)]->[3])=~s{::}{ }and$monkey}}"Monkey say"->Monkey::do'

Re^2: extracting data from HTML
by Jurassic Monk (Acolyte) on Jun 03, 2012 at 17:05 UTC

    it's alright to be biased

    I do like the idea of being as up to date as possible, though I sometimes have the suspicion that the Perl community can't keep pace with all the changes anyway. There still isn't a single package that does XSLT 2.0, XPath 2.0 and so on. Partly we rely on libxml2, which is not going to get an update to the next level.

    I managed to get HTML::TreeBuilder::XPath working and am playing around with it at the moment. Getting the right text from the HTML source with XPath is quite a struggle anyway, frequently resulting in errors... but I'm getting to grips with it, and it feels more reliable than running regexes on the source, especially since some parts consist of more than one <p> element. ->findvalues() does do a nice trick. I only need to get rid of the nasty cp1252 codes that slipped into an iso-8859-1 encoded HTML page; the € symbol isn't part of that encoding.

    I do not want to have a war between the monks, but please enlighten me more on why to use HTML5 instead of TreeBuilder

      "There still isn't one single package that does XSLT 2.0"

      There's XML::Saxon::XSLT2 (again, I'm the developer of it). It's a Perl wrapper around the Java Saxon library, using Inline::Java. It's a bit of a pain to install, and the interface between Java and Perl has the potential to be flaky, but right now it's your only option if you need XSLT 2.0 in Perl.

      I'd love to see some competitors to it spring up, I really would. The only reason I wrote it is because there was literally no other choice in Perl for XSLT 2.0; not out of a love for Java programming. ;-)

      "I do not want to have a war between the monks, but please enlighten me more on why to use HTML5 instead of TreeBuilder"

      Two main reasons:

      • If you want to use XML::LibXML, which, as I say, is a very good DOM implementation (with XPath, XML Schema, Relax NG, etc.), then HTML::HTML5::Parser integrates with it out of the box.

      • It follows the parsing algorithm from the W3C HTML5 working drafts, allowing it to deal with tag soup in much the same way as desktop browsers do. (It currently passes the majority of the html5lib test suite. html5lib is an HTML parsing library for Python and Ruby, and is pretty much the de facto reference implementation of the HTML5 parsing algorithm.) If you wish to deal with random content off the Web, that's kinda important, because there are an awful lot more people who test their content in desktop browsers than test it in HTML::TreeBuilder.

        A practical example. Check out the following piece of HTML in a desktop web browser. Note that (somewhat counter-intuitively) the paragraph containing the emphasised text is rendered above the "Hello World" greeting.

        <table>
          <tr><td>Hello World</td></tr>
          <p>This will be rendered <em>before</em> the greeting.</p>
        </table>

        Now run this test script:

        use 5.010;
        use HTML::TreeBuilder;
        use HTML::HTML5::Parser;

        my $string = do { local $/; <DATA> };   # slurp

        say "HTML::HTML5::Parser...";
        say HTML::HTML5::Parser
          -> load_html(string => $string)
          -> textContent;

        say "HTML::TreeBuilder...";
        say HTML::TreeBuilder
          -> new_from_content($string)
          -> as_text;

        __DATA__
        <table>
        <tr><td>Hello World</td></tr>
        <p>This will be rendered <em>before</em> the greeting.</p>
        </table>

        Note that HTML::HTML5::Parser returns the content in the same order as your web browser; HTML::TreeBuilder does not.

      That said, there are plenty of good things about HTML::TreeBuilder too; and if neither of the above apply to you, then it's a good option. It's stable, mature and well-understood by many Perl programmers. I don't really have anything bad to say about it.

      perl -E'sub Monkey::do{say$_,for@_,do{($monkey=[caller(0)]->[3])=~s{::}{ }and$monkey}}"Monkey say"->Monkey::do'
Re^2: extracting data from HTML
by Jurassic Monk (Acolyte) on Jun 03, 2012 at 23:24 UTC

    Is it only me that has this?

    I tried to get it all working from the example with my $xml = HTML::HTML5::Parser->load_html..., but of course my test website had to come back with an error. HTML::HTML5::Parser->load_html... doesn't handle options, I figured, so I had to use $parser->parse_html_file($URL, { ignore_http_response_code => 1 }). However, of course this happens to me... the user agent was not accepted and the site returned an HTTP 406 error.

    After tweaking around for a few hours, I managed to get it working:

    use 5.010;
    use LWP::UserAgent;
    use HTML::HTML5::Parser;

    my $user_agent = LWP::UserAgent->new;
    $user_agent->agent("HTML::HTML5::Parser/0.110 ");
    $user_agent->parse_head(0);

    my $parser = HTML::HTML5::Parser->new;
    my $xml    = $parser->parse_html_file($URL, {
        ignore_http_response_code => 1,
        user_agent                => $user_agent,
    });

    my $nodes = $xml->findnodes('//*[local-name()="title"]');
    say $nodes->get_node(1)->textContent;

    I'm proud I did it, but I don't like removing what feels like a security check from LWP::UserAgent; somehow, though, it was necessary for this website.



    Question: does it conflict with an HTTP 301 (Moved Permanently) status?

      but of course my test website had to come back with an error

      One tip for developing scrapers: it's both convenient for you and polite to the site you're scraping to save a local copy that you can hammer at all you want without bothering their server. If you're scraping a lot of pages and doing a lot of tweaking to your code, you risk really hammering someone's server. Once your extractor works, you can put back the Mechanize calls to the site, which are probably not the hard part anyway.
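
      A minimal sketch of that workflow, assuming LWP::UserAgent for the one-off download (the URL and cache filename are made up for illustration; swap in WWW::Mechanize or whatever fetcher you prefer):

      use strict;
      use warnings;
      use LWP::UserAgent;

      my $url   = 'http://www.example.com/product/12345';   # hypothetical page
      my $cache = 'product-12345.html';                      # local copy to hammer on

      unless (-e $cache) {
          my $ua  = LWP::UserAgent->new;
          my $res = $ua->get($url);
          die "Fetch failed: ", $res->status_line, "\n" unless $res->is_success;
          open my $fh, '>:raw', $cache or die "Can't write $cache: $!";
          print {$fh} $res->content;
          close $fh;
      }

      # From here on, develop the extractor against the local file only.
      open my $fh, '<:raw', $cache or die "Can't read $cache: $!";
      my $html = do { local $/; <$fh> };
      close $fh;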

      In the example I gave upthread, it would have been ok for me to hammer the site, but I ended up cloning it with wget and running it locally.

      Update: You might also want to see if the site you're scraping has an API that hands you structured data. I recently had to pull down the links for about 140 books from the Apple site, and they have a nice API that lets you search by ISBN. Amazon also tends to have an API for a lot of things. Other sites often do as well if you dig through the fine print at the bottom of the page.
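
      If such an API exists, the plumbing is usually just an HTTP GET plus a decode; a rough sketch, with the endpoint and field names invented purely for illustration:

      use strict;
      use warnings;
      use LWP::UserAgent;
      use JSON::PP qw(decode_json);   # core module since Perl 5.14

      my $ua  = LWP::UserAgent->new;
      my $res = $ua->get('http://api.example.com/books?isbn=9780596000271');  # hypothetical endpoint
      die $res->status_line, "\n" unless $res->is_success;

      my $book = decode_json( $res->decoded_content );
      print $book->{title}, "\n";     # hypothetical field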

        This whole thing of extracting data from an HTML source is about populating a web shop. My plan was to harvest as much data as needed, do some 'magick' with it, and then use RPC::XML to update the Magento database. I guess none of the websites will be friendly enough to allow me access to their source, mainly because they do not own their data: they license a web shop, and the data is provided by another party.

        Is this theft? Don't answer.

        Not every HTML source has the same underlying database, and some websites do provide additional, meaningful data that the 'big player' does not have. So yes, I will make different scrapers for each and every website, and even for different product types.

        The whole process will roughly be something like the following:

        1. Enter a product ID
        2. get the HTML and save a cached copy
        3. process the data on disk and create source.xml
        4. do something meaningful with the sources
        5. ask user for confirmation or missing information where needed
        6. do something meaningful with the sources and save productinfo.xml

        7. take the productinfo.xml and turn it into a magento.xml with XSLT
        8. feed the magento database

        I did a nice job with dirty programming, but the moment I encountered the cp1252 rubbish in my (supposedly) iso-8859-1 source, I gave up and wanted to start from scratch again: using proper XML modules, no longer relying on XML::Simple, and having discovered XSLT, which should help me out with processing the different sources and translating them from one (general) data model to the Magento model.
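
        For step 7, a minimal sketch of the XSLT transform using XML::LibXSLT (note it only does XSLT 1.0, since it wraps libxslt; the stylesheet filename here is invented):

        use strict;
        use warnings;
        use XML::LibXML;
        use XML::LibXSLT;

        my $source     = XML::LibXML->load_xml( location => 'productinfo.xml' );
        my $style_doc  = XML::LibXML->load_xml( location => 'to-magento.xsl', no_cdata => 1 );
        my $stylesheet = XML::LibXSLT->new->parse_stylesheet($style_doc);

        my $result = $stylesheet->transform($source);
        $stylesheet->output_file($result, 'magento.xml');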

Re^2: extracting data from HTML
by Jurassic Monk (Acolyte) on Jun 03, 2012 at 19:40 UTC

    too bad...

    I had hoped for a slightly more exotic result from HTML::HTML5::Parser. All I got was:

    exctracting data from HTML

    Using Data::Dumper($xml) doesn't give a nice result either:

    $VAR1 = bless( do{\(my $o = 21921056)}, 'XML::LibXML::Document' );

    time to do some more meditation

      Yes, it returns plain text because the textContent method is documented as:

      this function returns the content of all text nodes in the descendants of the given node as specified in DOM. (perldoc XML::LibXML::Node)

      Data::Dumper won't be much use with XML::LibXML. Nodes are all just numeric pointers to structures at the other side of the XS boundary (i.e. C structures). There is XML::LibXML::Debugging which allows, e.g.

      use XML::LibXML::Debugging;
      print Dumper( $xml->toDebuggingHash );
      perl -E'sub Monkey::do{say$_,for@_,do{($monkey=[caller(0)]->[3])=~s{::}{ }and$monkey}}"Monkey say"->Monkey::do'

        Forgive me, my brethren, but it looks like I bit off more than I could chew, and again I end up with bits I cannot put together.

        The example of
        my $nodes = $xml->findnodes('//*[local-name()="title"]')
        wasn't too hard to understand, although I was quite surprised by the construction of the XPath expression; I would have expected something simpler like
        my $nodes = $xml->findnodes('//html/head/title')
        But of course, it wouldn't be me if I didn't get it wrong again.

        With HTML::TreeBuilder::XPath it did work, even things like giving me all the table rows from a specific path and dumping them as text with:

        my @stuff = $tree->findvalues('//td[@class="BTrow"]/table/tr/td/table/tr');
        print Dumper(\@stuff);

        Trying that with HTML::HTML5::Parser only resulted in undefined results

        It looks to me like I'm missing some bit.

        Please, Toby, and others as well: what am I doing wrong here? It can't be the XPath syntax, can it?

        Thank you all for your enlightening words and inspiration.
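
        One likely explanation, offered as a guess rather than as Toby's answer: HTML::HTML5::Parser puts every element into the XHTML namespace, so unprefixed XPath steps like //td or /html/head/title match nothing against its DOM, which is exactly what the //*[local-name()="title"] trick works around. A sketch of the namespace-aware alternative using XML::LibXML::XPathContext:

        use XML::LibXML::XPathContext;

        my $xpc = XML::LibXML::XPathContext->new($xml);
        $xpc->registerNs( h => 'http://www.w3.org/1999/xhtml' );

        say $xpc->findvalue('//h:title');
        say $_->textContent for $xpc->findnodes('//h:td[@class="BTrow"]');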

      When you're dealing with XML::LibXML, you'll need to wade through XML::LibXML::Node, from which most of the other classes inherit. Most of them have a ->toString method if you're interested in their contents.
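
      For instance, a minimal sketch reusing the $xml document and XPath query from earlier in the thread:

      for my $node ( $xml->findnodes('//*[local-name()="title"]') ) {
          print $node->toString, "\n";      # the node serialised back to markup
          print $node->textContent, "\n";   # just its text content
      }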

        I was working with my own test stuff, and that didn't work. So I used Toby's example, which I assumed would work fine, but that did not, unfortunately.

      "exctracting data from HTML"

      Oh how insanely stupid! ARRRGGGHHHH!!!!!#$#@@#$%&^%

      All the time I was thinking it was a 'processing indicator' that something was being extracted by the HTML5 routine. ARRRRGGGHHHH!!!!

      /me wonders... do monks curse

      "exctracting data from HTML" is the title of that web page indeed, just as it was supposed to

      now the next things to work on.... tomorrow