That is pretty awkward!
Strangely enough, it seems that the other modules do not bother with namespaces at all.
The reasoning behind the namespaces, and why they therefore need to appear in the XPath expression, is clear. Parsing the badly formed HTML into an XML structure should of course lead to proper XHTML, I do see that. Hence nsURI is set to http://www.w3.org/1999/xhtml.
I made a dump of the 'title' node:
'nsuri'      => 'http://www.w3.org/1999/xhtml',
'suffix'     => 'title',
'qname'      => 'title',
'children'   => [ 'extracting data from HTML' ],
'type'       => 'Element',
'attributes' => [],
'prefix'     => undef
As the web page of the referenced PerlMonks node is just plain HTML with <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">, I assume the http://www.w3.org/1999/xhtml namespace must have slipped in somewhere during parsing. It would be nice to override that, or simply to remove it from each and every node in the XML document (roughly as sketched below).
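Something along these lines is what I have in mind for the removal, sketched here with XML::LibXML purely for illustration (it is not necessarily the module that produced the dump above), and assuming a single default xmlns declaration on the root element:

use strict;
use warnings;
use XML::LibXML;

# Illustrative input: every element lives in the XHTML namespace,
# the way the parsed document apparently ends up.
my $xhtml = <<'XHTML';
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>extracting data from HTML</title></head>
  <body><p>hello</p></body>
</html>
XHTML

my $doc  = XML::LibXML->load_xml( string => $xhtml );
my $root = $doc->documentElement;

# Blank out the default namespace declaration on the root element;
# per the XML::LibXML::Node docs (setNamespaceDeclURI is marked
# experimental), an empty/undef URI leaves the declaring element
# and its descendants without a namespace.
$root->setNamespaceDeclURI( '', undef );

# A plain, prefix-less XPath now matches again.
print $_->textContent, "\n" for $doc->findnodes('//title');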
Maybe there is a way to bypass those default namespaces, or to pull some quirks on the XPath expression itself before applying it to the document (two such quirks are sketched below).
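The two XPath-side quirks I have found so far, again sketched with XML::LibXML for illustration: register a prefix of my own for the XHTML namespace via an XPathContext (exactly the thing I would rather avoid), or match on local-name() and ignore namespaces altogether:

use strict;
use warnings;
use XML::LibXML;
use XML::LibXML::XPathContext;

# Assume a DOM whose elements are all in the XHTML namespace,
# like the illustrative document in the previous snippet.
my $doc = XML::LibXML->load_xml( string => <<'XHTML' );
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>extracting data from HTML</title></head>
</html>
XHTML

# Quirk 1: register an explicit prefix and use it in the expression.
my $xpc = XML::LibXML::XPathContext->new($doc);
$xpc->registerNs( x => 'http://www.w3.org/1999/xhtml' );
print $_->textContent, "\n" for $xpc->findnodes('//x:title');

# Quirk 2: sidestep namespaces entirely by testing local-name().
# More verbose, but no prefix registration is needed.
print $_->textContent, "\n"
    for $doc->findnodes(q{//*[local-name() = 'title']});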
All in all... yes, I do desire to be fully knowledgeable of all the ins and outs that come with XML, XML Schema, XPath, XSLT and more...
But I don't appreciate this awkward namespace trouble.
Brethren monks, is there a way to get around this, other than inserting my own prefix?
N.B. In almost every example of XPath expressions this is overlooked and simplified away, which is a nasty bummer when you want to work with it.