in reply to Re^4: Unicode and text files
in thread Unicode and text files
Well, given that this "NTBackup" tool is running on a little-endian machine (and is not bothering to include a BOM at the beginning of its output), it's probably safe to assume that the file it creates really is UTF-16LE (little-endian), and likewise for any other tools that resemble NTBackup in this respect.
Apart from that, if you get a file that seems to be UTF-16 but does not start with the two-byte sequence that serves as the BOM (\xFF\xFE for LE, or \xFE\xFF for BE), and you really don't have any other evidence about its actual byte order, open the file in ":raw" mode and read byte pairs until you encounter either "\x0A\x00" or "\x00\x0A". That's the line-feed character in UTF-16LE and UTF-16BE, respectively. (It turns out that Unicode codepoint U+0A00 is undefined/unassigned/unused, so there's no chance of ambiguity: a "\x0A\x00" pair could not be a legitimate big-endian character.)
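A minimal sketch of that scan (the sub name is mine, not from any existing module) might look like this:

```perl
use strict;
use warnings;

# Guess the byte order of a BOM-less UTF-16 file by scanning raw
# byte pairs for the line-feed character: "\x0A\x00" means
# little-endian, "\x00\x0A" means big-endian.
sub guess_utf16_endianness {
    my ($path) = @_;
    open my $fh, '<:raw', $path or die "open $path: $!";
    while (read($fh, my $pair, 2) == 2) {
        return 'UTF-16LE' if $pair eq "\x0A\x00";
        return 'UTF-16BE' if $pair eq "\x00\x0A";
    }
    return undef;   # no newline found; fall back to another cue
}
```

Of course this only works on files that contain at least one line break; for a short single-line file you'd need a different cue, such as the null-byte distribution below.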
In fact, the distribution of null bytes in general is a pretty good cue. The set of usable Unicode characters with "\x00" as the low-order byte is quite small, and I wouldn't expect any one file to contain more than one or two distinct characters from that set. If you see several different byte pairs in succession with a null in the same position, the nulls are bound to be the high-order bytes of characters in the ASCII range.
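That heuristic is easy to sketch as well (again, the sub name is my own invention): in ASCII-heavy UTF-16 text the null is the high-order byte, so in little-endian data the nulls land at odd byte offsets, and in big-endian data at even offsets.

```perl
use strict;
use warnings;

# Guess byte order from the positions of null bytes in a raw
# buffer: mostly-odd offsets suggest UTF-16LE, mostly-even
# offsets suggest UTF-16BE.
sub guess_by_null_positions {
    my ($bytes) = @_;
    my ($even, $odd) = (0, 0);
    for my $i (0 .. length($bytes) - 1) {
        next unless substr($bytes, $i, 1) eq "\x00";
        $i % 2 ? $odd++ : $even++;
    }
    return 'UTF-16LE' if $odd  > $even;
    return 'UTF-16BE' if $even > $odd;
    return undef;   # no clear majority; text may be non-ASCII-heavy
}
```

Unlike the newline scan, this works on a single line of text, but it assumes the content is mostly ASCII-range characters; a file full of CJK or other non-Latin text would have few nulls to count.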