I don't know if this will be of much help, but...
If all you want to do is fetch the document, you could just use a system call to 'wget', bypassing LWP entirely. It's not exactly elegant, but it will get you your pages. Alternatively, you could run your own content test for the character encoding: build a smallish statistical model, then compare the content of a suspected UTF-8 page against the model. That would be the more elegant approach — no, rather: that would be the more elegant route (it's probably what Mozilla does when it has to guess the encoding of an unmarked page), but it would obviously require more work on your part.
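To make the first suggestion concrete, here's a rough Perl sketch of shelling out to wget. The URL and output filename are placeholders, and it assumes wget is on your PATH:

```perl
use strict;
use warnings;

# Hypothetical URL and output file -- substitute your own.
my $url  = 'http://example.com/page.html';
my $file = 'page.html';

# -q quiets wget; -O writes the response body to $file.
# system() returns 0 on success.
system('wget', '-q', '-O', $file, $url) == 0
    or die "wget failed: $?";
```

And for the second suggestion, a full statistical model is overkill if all you need is a yes/no guess. A much cruder test — just checking whether the bytes decode cleanly as UTF-8 — catches most cases, since random Latin-1 text is very unlikely to form valid multi-byte UTF-8 sequences. A sketch using the core Encode module:

```perl
use strict;
use warnings;
use Encode qw(decode FB_CROAK);

# Returns true if the octet string is well-formed UTF-8.
# This is only a validity check, not the statistical model
# described above -- it can't distinguish UTF-8 from plain ASCII.
sub looks_like_utf8 {
    my ($octets) = @_;
    my $ok = eval { decode('UTF-8', $octets, FB_CROAK); 1 };
    return $ok ? 1 : 0;
}
```

If you need finer-grained guessing (e.g. UTF-8 vs. various legacy encodings), that's where the Mozilla-style frequency model earns its keep.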
Or why not check out Nutch's innards?
Sorry if you read all this and found it a waste of time.