in reply to Re: Unicode and text files
in thread Unicode and text files

open FILE, "<:encoding(utf16)", $myfile;
while (<FILE>) { ... }

Produces "UTF-16:Unrecognised BOM 4a00 at backup.pl line 68.", where line 68 is the while(<FILE>) line. So I'm guessing it's not UTF-16.

The file is produced by NTBackup if that helps any.

Re^3: Unicode and text files
by Hue-Bond (Priest) on Oct 12, 2006 at 14:02 UTC

    Try utf16le or utf16be

    --
    David Serrano

      WooHoo! utf16le it is!

      Now, is there any way to automagically determine what encoding a file is? Or is that something I'm just going to have to know beforehand?

        is there any way to automagically determine what encoding a file is?

        That's precisely what the BOM ("byte order mark") is for. When creating a file, if you don't specify a byte order, Perl writes a BOM for you; if you do specify one (utf16le or utf16be), the file is BOM-less. Files created without an explicit byte order can then be read back with plain :encoding(utf16):

        $ /usr/bin/perl
        use strict;
        use warnings;

        my $c = 'a';
        my $fd;

        open $fd, '>:encoding(utf16le)', 'foo-le' or die "open: $!";
        print $fd $c;
        close $fd;

        open $fd, '>:encoding(utf16be)', 'foo-be' or die "open: $!";
        print $fd $c;
        close $fd;

        open $fd, '>:encoding(utf16)', 'foo' or die "open: $!";
        print $fd $c;
        close $fd;
        __END__

        $ xxd foo-le
        0000000: 6100                                     a.
        $ xxd foo-be
        0000000: 0061                                     .a
        $ xxd foo
        0000000: feff 0061                                ...a

        $ /usr/bin/perl
        open my $fd, '<:encoding(utf16)', 'foo' or die "open: $!";
        print while <$fd>;
        close $fd;
        __END__
        a

        Update: Of course, I realized after clicking "Create" that I didn't really answer your actual question :^). Well, if files don't have a BOM, you can only guess or brute-force them. Or add a BOM to them ;^).


        --
        David Serrano

        is there any way to automagically determine what encoding a file is?

        Looks like Encode::Detect might be worth investigating.

        --
        <http://dave.org.uk>

        "The first rule of Perl club is you do not talk about Perl club."
        -- Chip Salzenberg

        Now, is there any way to automagically determine what encoding a file is? Or is that something I'm just going to have to know beforehand?

        Well, given that this "NTBackup" tool is running on a little-endian machine (and is not bothering to include a BOM at the beginning of its output), it's probably safe to assume that the file it creates really is UTF-16LE (little-endian), and likewise for any other tools that resemble NTBackup in this respect.

        Apart from that, if you get a file that seems to be UTF-16 but does not start with the two-byte sequence that serves as the BOM (\xFF\xFE for LE, or \xFE\xFF for BE), and you really don't have any other evidence about its actual byte order, open the file in ":raw" mode and read byte pairs until you encounter either "\x0A\x00" or "\x00\x0A". Those are the line-feed character in UTF-16LE and UTF-16BE, respectively. (Unicode code point U+0A00 is unassigned, so there's no chance of ambiguity.)
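That scan might look something like this (guess_utf16_order is a name I've made up for illustration, not anything standard):

```perl
use strict;
use warnings;

# Sketch of the byte-pair scan described above. Read the file in :raw
# mode, two bytes at a time, until we find the line-feed character;
# which half of the pair holds the 0x0A gives away the byte order.
sub guess_utf16_order {
    my ($path) = @_;
    open my $fh, '<:raw', $path or die "open: $!";
    my $pair;
    while (read($fh, $pair, 2) == 2) {
        return 'utf16le' if $pair eq "\x0A\x00";   # LF, low byte first
        return 'utf16be' if $pair eq "\x00\x0A";   # LF, high byte first
    }
    return;   # no line feed found; can't tell
}
```

Note that this assumes the file starts on a character boundary (true for a BOM-less UTF-16 file read from offset 0), and a file with no line feed at all will defeat it.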

        In fact, the distribution of null bytes in general is a pretty good cue. The set of usable Unicode characters with "\x00" as the low-order byte is quite small, and I wouldn't expect any one file to contain more than one or two distinct characters from that set. If you see several different byte pairs in succession with a null in the same position, the nulls are bound to be the high-order bytes of characters in the ASCII range.
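That cue can be turned into a quick heuristic over a raw byte string. The function name and the 2x majority threshold below are my own arbitrary choices, just to make the idea concrete:

```perl
use strict;
use warnings;

# Sketch of the null-distribution heuristic described above. In UTF-16
# text that is mostly ASCII, the null half of each byte pair sits at a
# fixed parity: nulls at odd offsets mean the low byte comes first
# (little-endian), nulls at even offsets mean big-endian. Returns undef
# when the counts are too close to call.
sub guess_by_nulls {
    my ($bytes) = @_;
    my ($even, $odd) = (0, 0);
    for my $i (0 .. length($bytes) - 1) {
        next unless substr($bytes, $i, 1) eq "\x00";
        ($i % 2 == 0) ? $even++ : $odd++;
    }
    return 'utf16le' if $odd  > 2 * ($even + 1);
    return 'utf16be' if $even > 2 * ($odd  + 1);
    return;   # inconclusive
}
```

For example, the bytes "a\x00b\x00c\x00" have all their nulls at odd offsets and would be called little-endian, while "\x00a\x00b\x00c" would be called big-endian. Text that is mostly outside the ASCII range would leave this inconclusive.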