saintmike has asked for the wisdom of the Perl Monks concerning the following question:

When fetching a web page with UTF-8-encoded content from a server that doesn't declare UTF-8 in its response headers, I'm getting this warning from LWP::UserAgent on a simple get():
Parsing of undecoded UTF-8 will give garbage 
when decoding entities at .../LWP/Protocol.pm line xx.
The cause seems to be HTML::Parser (or rather HTML::HeadParser), which assumes the content is not UTF-8 but then sees something that looks like UTF-8.

It's only a warning, so I guess I could just suppress it, but I wanted to know if you guys had seen this before and figured out an elegant way to deal with it.
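If suppression turns out to be the pragmatic choice, a %SIG hook lets you silence just this one message instead of all warnings. A sketch (the pattern matches the warning text quoted above; in real code the warning would be raised inside $ua->get(), here it's triggered by hand):

```perl
use strict;
use warnings;

# Silence only the undecoded-UTF-8 warning; let everything else through.
local $SIG{__WARN__} = sub {
    warn @_ unless $_[0] =~ /^Parsing of undecoded UTF-8/;
};

warn "Parsing of undecoded UTF-8 will give garbage\n";  # suppressed
warn "something else went wrong\n";                     # still printed
```

Keeping the handler in a small scope (via local inside a block) limits the suppression to the one request that needs it.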

As explained here, parse_head() is needed to deal with some oddball web servers; otherwise I could simply turn it off in LWP::UserAgent's constructor.
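Another workaround, if you already know the pages are UTF-8, is to take the raw response bytes and decode them yourself with the core Encode module instead of relying on the parser's guess. A sketch ($raw here is a hard-coded stand-in for an undecoded response body, e.g. the result of $response->content):

```perl
use strict;
use warnings;
use Encode qw(decode);

# Stand-in for the undecoded body of an HTTP response,
# e.g. my $raw = $ua->get($url)->content;
my $raw  = "caf\xc3\xa9";          # the UTF-8 bytes of "café"
my $text = decode('UTF-8', $raw);  # now a proper Perl character string

printf "%d bytes became %d characters\n", length($raw), length($text);
# 5 bytes became 4 characters
```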


Replies are listed 'Best First'.
Re: Parsing of undecoded UTF-8 will give garbage
by kettle (Beadle) on Aug 03, 2006 at 02:06 UTC
    I don't know if this will be of much help, but...

    If all you want to do is fetch the document, you could just use a system call to 'wget', bypassing LWP entirely (not exactly elegant, but it will get you your pages). Alternatively, you could run your own content test for the character encoding: build a smallish statistical model, then compare the content of a suspected UTF-8 page against it. That would be the more elegant approach (it's probably what Mozilla does when it has to guess the encoding of an unmarked page), but it would obviously require more work on your part.
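    Guessing the encoding from the content doesn't have to mean building your own model: the core Encode::Guess module already does a simple version of this check. A sketch (the byte string is a made-up example, not data from the original post):

```perl
use strict;
use warnings;
use Encode::Guess;   # default suspects: ascii, utf8, BOM-marked UTF-16/32

# Bytes of unknown encoding; here, the UTF-8 form of "naïve".
my $octets  = "na\xc3\xafve";
my $decoder = Encode::Guess->guess($octets);

# guess() returns an encoding object on success, an error string otherwise.
ref $decoder or die "couldn't guess encoding: $decoder";
print "looks like ", $decoder->name, "\n";

my $text = $decoder->decode($octets);   # decode with the guessed encoding
```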

    or why not check out Nutch's innards?

    sorry if you read all this and found it a waste of time.
      From what I've found out so far, it's a bug in LWP.

      But instead of resorting to other methods, it's usually better to send a bug report to the module author(s) and get it fixed.

      I've sent it to the LWP mailing list; a fix should be on its way!