A scrape tool I wrote downloads XHTML documents and parses them using XML::LibXML. As it turns out, it is hammering www.w3.org, fetching http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd (the XHTML DTD) and the three DTDs it includes for every document I parse.
Can libxml2 be told to cache this? Or better yet, cache the parsed result? Is this what ext_ent_handler catches? Does someone already have an ext_ent_handler written? It seems silly that I have to do any of this.
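One partial answer to the first question (a sketch of mine, not from this discussion): libxml2 supports OASIS XML catalogs, which XML::LibXML exposes via load_catalog. A catalog that maps the DTD's system ID to a local copy avoids the fetch entirely, with no handler needed. The catalog contents, the tempfile plumbing, and the local file:// path below are all assumptions; you'd save the DTD to disk once yourself, and the three entity files it includes would need entries too.

use strict;
use warnings;

use File::Temp qw( tempfile );
use XML::LibXML qw( );

# An OASIS XML catalog mapping the DTD's system ID to a local copy.
# The file:// path is an assumption; save the DTD there first.
my $catalog_xml = <<'__EOI__';
<?xml version="1.0"?>
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
    <system
        systemId="http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"
        uri="file:///usr/local/share/dtd/xhtml1-strict.dtd"/>
</catalog>
__EOI__

# Write the catalog somewhere libxml2 can read it.
my ($fh, $catalog_file) = tempfile(SUFFIX => '.xml');
print {$fh} $catalog_xml;
close($fh);

# The parser consults the catalog before going to the network.
my $parser = XML::LibXML->new();
$parser->load_catalog($catalog_file);

In practice you'd install a permanent catalog (or point libxml2's XML_CATALOG_FILES environment variable at one) rather than writing a tempfile on every run.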
See also: W3C Systems Team Blog: W3C's Excessive DTD Traffic.
Update: Well, the following answers my third question affirmatively:
use strict;
use warnings;

use XML::LibXML qw( );

my $xhtml = <<'__EOI__';
<?xml version="1.0"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Test</title>
</head>
<body>Test</body>
</html>
__EOI__

my $parser = XML::LibXML->new(
    ext_ent_handler => sub {
        # Dump the arguments libxml2 passes when it wants an external entity.
        use Data::Dumper;
        local $Data::Dumper::Useqq = 1;
        print(Dumper(\@_));
        return "";   # Returned text is parsed as the entity, so nothing is fetched.
    },
);

$parser->parse_string($xhtml);
Output:

$VAR1 = [
          "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd",
          "-//W3C//DTD XHTML 1.0 Strict//EN"
        ];
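Since the handler's return value is parsed as the entity's content, the same hook can also do the caching itself. Here is a minimal sketch (mine, not part of the post above) that fetches each entity at most once per process and replays it from memory afterwards; LWP::Simple and the in-memory %cache are my assumptions, and a file-backed cache would survive restarts.

use strict;
use warnings;

use LWP::Simple qw( get );
use XML::LibXML qw( );

my %cache;   # system ID => entity text, filled on first use

my $parser = XML::LibXML->new(
    ext_ent_handler => sub {
        my ($sys_id, $pub_id) = @_;
        # Fetch once, then serve the cached text. Falling back to an
        # empty entity on fetch failure is an assumption on my part.
        $cache{$sys_id} //= get($sys_id) // "";
        return $cache{$sys_id};
    },
);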