CountZero has asked for the wisdom of the Perl Monks concerning the following question:

Dear Sisters and Brothers in Perl!

I'm trying to download some web pages containing non-ASCII characters. The page declares its encoding as

<meta http-equiv="Content-Type" content="text/html; charset=windows-1251" />
and in my browser (Firefox 3) it displays very nicely.

Now I have written a little script to get me one page as follows:

use strict;
use LWP::UserAgent;

my @captions;
my $ua       = LWP::UserAgent->new;
my $response = $ua->get('http://www.1418.ru/chronicles.php?p=100');
if ($response->is_success) {
    my $file = $response->content;
    $file =~ m/<h3>(.*)<\/h3>/i;
    my $h3_content = $1;
    push @captions, $h3_content;
}
else {
    warn 'ERROR: no HTML ', $response->status_line;
}
Before one of you starts saying that I should not parse HTML with a regex: I know, and besides, there is only one <h3> tag on the page, so it seemed a bit overkill to break out HTML::Parser or HTML::TreeBuilder.

After I have downloaded all the pages I need, I save @captions into a file, and when I open that file (which is actually an HTML file with the proper charset declaration) it no longer shows Cyrillic characters, but funny accented characters.

So I thought that I needed to use a Cyrillic encoding as follows:

open my $fh, '>:encoding(iso-8859-5)', 'c:/data/captions2.txt';
print $fh join "\n", @captions;
But that gives me a lot of errors such as:
"\x{00ff}" does not map to iso-8859-5.
(iso-8859-5 is the iso name for windows-1251) and the file is full of these literal "\x{00ff}" escape sequences and still does not render correctly.

So I guess that things already go wrong when importing the web-page and that somewhere there the proper encoding gets lost and cannot be restored.

I think my question really boils down to "how to convince LWP::UserAgent to keep the Cyrillic encoding of the webpage?"

Update: LWP::UserAgent indeed did not decode the webpage, and all was solved when I put the proper encoding (windows-1251) in the HTML header of the output file. Thanks all!
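For the record, a minimal sketch of that fix (file name and layout are illustrative, not my exact script): write the raw windows-1251 bytes out unchanged and declare that charset in the output file's own header.

my @captions;    # filled from ->content as in the script above,
                 # so each entry still holds raw windows-1251 bytes

# ... download loop as before ...

open my $fh, '>', 'c:/data/captions2.html' or die $!;
print $fh qq{<meta http-equiv="Content-Type" content="text/html; charset=windows-1251" />\n};
print $fh join("\n", @captions), "\n";
close $fh;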

CountZero

"A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little nor too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity." - The Tao of Programming, 4.1 - Geoffrey James

Re: Downloading webpages with non-ASCII characters
by ikegami (Patriarch) on Aug 27, 2008 at 06:30 UTC

    LWP doesn't decode anything unless you use ->decoded_content. If you use ->content, you get the raw bytes returned by the web server. By using '>:encoding(iso-8859-5)', you are re-encoding chars that have already been encoded using windows-1251. That makes no sense. You need to undo the first encoding before encoding again.

    use Encode qw( decode from_to );

    # Outputs windows-1251 text
    open my $fh, '>', $qfn;
    print $fh $response->content;

    # Outputs iso-8859-5 text
    open my $fh, '>', $qfn;
    $content = $response->content;
    from_to($content, 'windows-1251', 'iso-8859-5');
    print $fh $content;

    # Outputs iso-8859-5 text
    open my $fh, '>:encoding(iso-8859-5)', $qfn;
    print $fh decode('windows-1251', $response->content);

    # Outputs iso-8859-5 text, assuming
    # the content encoding is detected.
    open my $fh, '>:encoding(iso-8859-5)', $qfn;
    print $fh $response->decoded_content;

    So,

    my $file = $response->content;
    should be one of
    my $file = $response->decoded_content;
    or
    my $file = decode('windows-1251', $response->content);

    (iso-8859-5 is the iso name for windows-1251)

    No. They're quite different:
    windows-1251
    iso-8859-5
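    A quick way to see the difference for yourself (a sketch; the choice of character is arbitrary): the same Cyrillic letter maps to different byte values under the two charsets.

    use Encode qw( encode );

    my $ch = "\x{0410}";    # CYRILLIC CAPITAL LETTER A
    printf "windows-1251: 0x%02X\n", ord encode('cp1251',     $ch);   # 0xC0
    printf "iso-8859-5:   0x%02X\n", ord encode('iso-8859-5', $ch);   # 0xB0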

Re: Downloading webpages with non-ASCII characters
by moritz (Cardinal) on Aug 27, 2008 at 06:09 UTC
    Have you tried the decoded_content method of HTTP::Response?

    I don't know if it actually works in this case because the charset isn't advertised in the HTTP header, but I think it's worth a try.
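    If I remember correctly, LWP parses <meta http-equiv="Content-Type"> tags into the response headers by default (the parse_head option), so decoded_content has a good chance of finding the charset anyway. A sketch:

    use LWP::UserAgent;

    my $ua = LWP::UserAgent->new( parse_head => 1 );    # 1 is already the default
    my $response = $ua->get('http://www.1418.ru/chronicles.php?p=100');
    my $text = $response->decoded_content;              # Perl characters, not bytes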

    Update: more likely your problem is this:

    (iso-8859-5 is the iso name for windows-1251)

    Wikipedia seems to disagree:

    Windows-1251 and KOI8-R (or its Ukrainian variant KOI8-U) are much more commonly used than ISO 8859-5, which never really caught on.

    And indeed Encode knows of the encoding cp1251 - maybe use that instead?
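    You can ask Encode which names it recognizes: Encode::resolve_alias returns the canonical name (or false). A quick check:

    use Encode ();

    print Encode::resolve_alias('cp1251'),       "\n";   # "cp1251"
    print Encode::resolve_alias('windows-1251'), "\n";   # also "cp1251"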

Re: Downloading webpages with non-ASCII characters
by Gangabass (Vicar) on Aug 27, 2008 at 06:07 UTC

    LWP::UserAgent didn't do anything with the encoding...

    In my script

    use strict;
    use LWP::UserAgent;

    my @captions;
    my $ua = LWP::UserAgent->new();
    my $response = $ua->get('http://www.1418.ru/chronicles.php?p=100');
    if ($response->is_success) {
        my $file = $response->content;
        $file =~ m/<h3>(.*)<\/h3>/i;
        my $h3_content = $1;
        push @captions, $h3_content;
    }
    else {
        warn 'ERROR: no HTML ', $response->status_line;
    }

    open TEST, ">", "test.txt" or die $!;
    print TEST $captions[0];
    close TEST;

    I get a text file with normal cp1251 text.

    So the question is: how do you open the resulting file?
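    If the file holds raw cp1251 bytes, whatever opens it has to be told so. In Perl that would be a matching encoding layer on the read side (a sketch; the file name matches the snippet above):

    open my $in, '<:encoding(cp1251)', 'test.txt' or die $!;
    my $caption = <$in>;    # decoded to Perl characters on read
    close $in;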

Re: Downloading webpages with non-ASCII characters
by Anonymous Monk on Aug 27, 2008 at 06:12 UTC
    I think my question really boils down to "how to convince LWP::UserAgent to keep the Cyrillic encoding of the webpage?"
    Nope, here's why :)
    D:\>wget "http://www.1418.ru/chronicles.php?p=100" -O 1418.ru.p-100.wget.html
    --23:15:17--  http://www.1418.ru/chronicles.php?p=100
               => `1418.ru.p-100.wget.html'
    Resolving www.1418.ru... 83.220.35.214
    Connecting to www.1418.ru|83.220.35.214|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: unspecified [text/html]

        [ <=> ] 9,294          38.75K/s

    23:15:18 (38.75 KB/s) - `1418.ru.p-100.wget.html' saved [9294]

    D:\>lwp-request -m get "http://www.1418.ru/chronicles.php?p=100" > 1418.ru.p-100.lwp-request.html

    D:\>md5sum 1418.ru.p-100.lwp-request.html 1418.ru.p-100.wget.html
    221e06fca05e17a1f4ae6382acbb39b9 *1418.ru.p-100.lwp-request.html
    221e06fca05e17a1f4ae6382acbb39b9 *1418.ru.p-100.wget.html

    D:\>

    The identical MD5 sums show that LWP delivers exactly the same bytes as wget does, so the encoding is never lost in transit.