Only ASCII characters (ord <= 0x7F) are represented in UTF-8 the same way as in latin1, i.e. as single bytes. By the way, there is a module, IO::HTML, which can be used to determine the encoding of HTML files (seekable :raw streams only).
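A minimal sketch of how that might look (the filename is hypothetical; check the IO::HTML docs for the exact export list and options):

    use strict;
    use warnings;
    use IO::HTML qw(html_file sniff_encoding);

    my $filename = 'page.html';    # hypothetical file, for illustration only

    # sniff_encoding() wants a seekable handle opened in :raw mode
    open my $fh, '<:raw', $filename or die "Can't open $filename: $!";
    my $encoding = sniff_encoding($fh, $filename);
    print 'Detected encoding: ', $encoding // 'unknown', "\n";

    # or let IO::HTML open the file with the right :encoding() layer applied
    my $html_fh = html_file($filename);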
If you are positive that your web pages consist only of ASCII and valid UTF-8, you can use HTML::TokeParser::->new( \ decode "UTF-8", $raw_html ); (or even utf8::decode($html); HTML::TokeParser::->new(\$html)), but it will complain and/or produce mojibake (or at least U+FFFD REPLACEMENT CHARACTERs) if (when?) the crawler encounters latin1/cp1252/koi8/some other non-ASCII encoding.
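For completeness, a minimal, self-contained sketch of the decode-first approach (slurping raw bytes from STDIN is just a stand-in for whatever your crawler actually fetched):

    use strict;
    use warnings;
    use Encode qw(decode FB_CROAK);
    use HTML::TokeParser;

    # stand-in for the crawler: slurp raw (undecoded) bytes
    my $raw_html = do { local $/; binmode STDIN; <STDIN> };

    # FB_CROAK dies on malformed UTF-8 instead of silently inserting U+FFFD
    my $html = decode('UTF-8', $raw_html, FB_CROAK);

    # pass a reference to the already-decoded string
    my $p = HTML::TokeParser->new(\$html);
    while (my $token = $p->get_token) {
        next unless $token->[0] eq 'S' && $token->[1] eq 'title';
        # get_trimmed_text() decodes entities; safe here because $html
        # already holds characters, not bytes
        print $p->get_trimmed_text('/title'), "\n";
    }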
In reply to Re^3: Parsing of undecoded UTF-8 will give garbage when decoding entities by aitap, in thread Parsing of undecoded UTF-8 will give garbage when decoding entities by itsscott