The docs do a great job of explaining what happens:
The -T and -B switches work as follows. The first block or so of the file is examined for odd characters such as strange control codes or characters with the high bit set. If too many strange characters (>30%) are found, it's a -B file, otherwise it's a -T file. Also, any file containing null in the first block is considered a binary file.
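For anyone who hasn't used the two tests together, here is a minimal sketch of the usual idiom (the file names are made up for illustration):

    use strict;
    use warnings;

    for my $file ('notes.txt', 'photo.jpg') {    # hypothetical files
        if (-T $file) {
            print "$file looks like a text file\n";
        }
        elsif (-B $file) {
            print "$file looks like a binary file\n";
        }
        else {
            print "$file is missing or could not be read\n";
        }
    }

Note that an empty file counts as both -T and -B, so test whichever condition you actually care about first.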
The docs are indeed vague about the particular "heuristic" tests that are used, but here's an interesting observation:
If a bona-fide text file happens to contain mostly non-ASCII utf8 characters, -B (in perl 5.8.x) will correctly return "false" (not binary), even though nearly all the bytes in the file have their eighth bit set.
But if the same text data are stored in a file using a non-unicode character set (e.g. iso-8859-5, -6, -7 for Cyrillic, Arabic, Greek, respectively), -B returns "true". Most of the bytes have their 8th bit set, but they aren't parsable as utf8.
(I only had Arabic text on hand for the test, and used just 5.8.1 on Mac OS X, but I trust that the result extrapolates to other languages, later versions, and different OSes.)
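Something like the following reproduces the comparison; the file names and the sample string are invented (Greek here, where the original test used Arabic), but the idea is the same: write the identical non-ASCII text once as utf8 and once in a legacy single-byte encoding, then ask -B about each file.

    use strict;
    use warnings;
    use Encode qw(encode);

    # the same non-ASCII text, several hundred characters of it
    my $text = "\x{3b1}\x{3b2}\x{3b3} " x 200;    # Greek letters, as an example

    open my $fh, '>:raw', 'utf8.txt' or die $!;
    print {$fh} encode('UTF-8', $text);
    close $fh;

    open $fh, '>:raw', 'legacy.txt' or die $!;
    print {$fh} encode('iso-8859-7', $text);
    close $fh;

    printf "utf8.txt   -B: %s\n", (-B 'utf8.txt')   ? 'yes' : 'no';
    printf "legacy.txt -B: %s\n", (-B 'legacy.txt') ? 'yes' : 'no';

If the observation above holds, the utf8 file should come back "no" (not binary) while the legacy-encoded file comes back "yes", even though both are full of high-bit bytes.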
It is just a guess, but most likely a guess that's good enough.
It is a guess because Perl only looks at a portion of the file. Unless you intentionally disguise it, a binary file will quickly reveal itself by showing all kinds of strange characters.
However, there is a chance that no strange character happens to appear in the chunk Perl examines, even though there are some elsewhere. In that case, Perl will mistakenly report a binary file as text.
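A quick way to see that failure mode, if you're curious (the file name is invented): build a file whose leading block is clean ASCII but whose tail is random bytes. Since -T only examines the leading chunk, it can be expected to call the file text anyway.

    use strict;
    use warnings;

    open my $fh, '>:raw', 'sneaky.dat' or die $!;
    print {$fh} "perfectly ordinary ASCII text\n" x 1000;   # well past the first block
    print {$fh} map { chr int rand 256 } 1 .. 4096;         # binary junk at the end
    close $fh;

    printf "-T says %s\n", (-T 'sneaky.dat') ? 'text'   : 'not text';
    printf "-B says %s\n", (-B 'sneaky.dat') ? 'binary' : 'not binary';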