bliako has asked for the wisdom of the Perl Monks concerning the following question:

Esteemed Monks,

I am searching for unicode filenames on disk with File::Find::Rule in this way:

use utf8;
use Test::More;
use Test2::Plugin::UTF8;
use Devel::Peek;
use Encode;
use File::Find::Rule;

# the file's 1st letter is greek kappa,
# 2nd letter is iota accented (with tonos)
my $expected = 'test/κί.png';
my @gots = map { Encode::decode_utf8($_) }
    File::Find::Rule->file()->name(qr/.*\.png/)->in('test');
is(scalar(@gots), 1, "got 1 item");
is($gots[0], $expected, "name is as expected");
Dump($expected);
Dump($gots[0]);
done_testing();

Dir 'test' contains one file: κί.png (kappa, iota with tonos, dot, png).

In Linux, the above succeeds and prints the filenames as:

FLAGS = (POK,IsCOW,pPOK,UTF8)
PV = 0x561260048d00 "test/\xCE\xBA\xCE\xAF.png"\0 [UTF8 "test/\x{3ba}\x{3af}.png"]
CUR = 13
LEN = 15

FLAGS = (POK,IsCOW,pPOK,UTF8)
PV = 0x561260249060 "test/\xCE\xBA\xCE\xAF.png"\0 [UTF8 "test/\x{3ba}\x{3af}.png"]
CUR = 13
LEN = 15

In OSX (10.13 high sierra whatever), it fails and it prints:

FLAGS = (POK,IsCOW,pPOK,UTF8)
PV = 0x7fd35e7d8b60 "test/\316\272\316\257.png"\0 [UTF8 "test/\x{3ba}\x{3af}.png"]
CUR = 13
LEN = 15

FLAGS = (POK,IsCOW,pPOK,UTF8)
PV = 0x7fd35e7ee4e0 "test/\316\272\316\271\314\201.png"\0 [UTF8 "test/\x{3ba}\x{3b9}\x{301}.png"]
CUR = 15
LEN = 17

The difference is that on Linux the greek iota with tonos (accented) is a single character: U+03AF (hex \x{3af}). It comes out that way both in the string I typed into the script (the $expected) and in the name File::Find::Rule found on disk.

On OSX, the expected value is the same as on Linux. But the name File::Find::Rule finds on disk is now kappa, iota, and a *separate* combining accent (\x{3ba}\x{3b9}\x{301}). It looks the same, but the test fails.

Additional info:

The raw output of File::Find::Rule is:

OSX:   PV = 0x7f8724885690 "test/\316\272\316\271\314\201.png"\0   <<<< separate accent
Linux: PV = 0x5566b6bc05d0 "test/\xCE\xBA\xCE\xAF.png"\0

So, it seems the culprit is File::Find::Rule, which adds the separate accent.

The program works fine, I guess, because the OS treats these filenames with accented unicode chars correctly. What's bothering me is that my tests fail.

Am I doing things wrong? I need some advice.

bw, bliako

5min Edit: changed the title


Replies are listed 'Best First'.
Re: File::Find::Rule returns different filenames if they have chars with accents: OSX vs Linux
by Corion (Patriarch) on Jan 02, 2024 at 17:25 UTC

    Most likely, you want to normalize the representation of the Unicode strings. The likely module is Unicode::Normalize, and for the comparison it doesn't matter which form you use, but they should match.
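    A minimal sketch of that fix (with hypothetical strings, not the original test data): normalize both sides with NFC (or NFD, as long as both sides use the same form) before comparing.

```perl
use strict;
use warnings;
use utf8;
use Unicode::Normalize qw(NFC);

# composed form: kappa + iota-with-tonos (U+03AF)
my $expected = "test/\x{3ba}\x{3af}.png";
# decomposed form: kappa + iota (U+03B9) + combining tonos (U+0301),
# as the OSX filesystem hands it back
my $got = "test/\x{3ba}\x{3b9}\x{301}.png";

print $expected eq $got           ? "equal\n" : "not equal\n";  # not equal
print NFC($expected) eq NFC($got) ? "equal\n" : "not equal\n";  # equal
```

    In the original test this would amount to comparing NFC($gots[0]) against NFC($expected).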

      perfect! thanks Corion! I learn a new thing every year.

Re: File::Find::Rule returns different filenames if they have chars with accents: OSX vs Linux
by kikuchiyo (Hermit) on Jan 02, 2024 at 17:59 UTC

    It's not necessarily File::Find::Rule that arbitrarily changes the representation of the file name.

    I recall that a few years ago I had to test the compatibility of our application at $work with various operating systems, and there were differences related to unicode normalization: browsers on OSX tended to return accented characters typed into a password field in their decomposed form (basic letter + combining accent), while those on Windows and Linux returned the composed form (accented letter). Perhaps this reflects a widespread custom on these operating systems, or a feature in an underlying library.
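    The two representations can be produced explicitly with Unicode::Normalize; a small sketch showing the round trip for the iota-with-tonos from the original post:

```perl
use strict;
use warnings;
use utf8;
use Unicode::Normalize qw(NFC NFD);

my $s = "\x{3af}";    # iota with tonos, composed

# decompose, then recompose, printing the codepoints at each step
printf "NFD: %s\n", join ' ', map { sprintf 'U+%04X', ord } split //, NFD($s);
printf "NFC: %s\n", join ' ', map { sprintf 'U+%04X', ord } split //, NFC(NFD($s));
# NFD: U+03B9 U+0301
# NFC: U+03AF
```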

    You could try ls | xxd to check the actual representation of your file names in the file system. On my linux box most files (that have accented characters in their names) are in the composed form, but I've found a few that aren't.

      thanks, I have just discovered this myself: when I decided to test how it fares on a dir with unicode names, taking the tarball distribution from OSX (with said dir) to Linux broke the tests, even though the dirs looked the same; one was in normalised form and the other wasn't. I will experiment with creating the dirs and files during testing in normalised form and see whether both OSes respect that. And I haven't touched windows yet :(( yikes.

        A fine mess indeed.

        I've experimented as well, and on linux it's possible to have two (or more) different files whose names appear to be the same because of different unicode normalization.

        perl -e 'for (["a\x{0301}", "decomposed"], ["\xc3\xa1", "composed"]) {open my $F, ">", $_->[0]; print $F $_->[1]; close $F}'

        After running this:

        $ ls
        -rw-r--r--. 1 user user  8 Jan  3 15:31 á
        -rw-r--r--. 1 user user 10 Jan  3 15:31 á
        (in the terminal these appear identical)

        If I turn on my native keyboard layout and type "less á", I get the composed file. But I can copy and paste the decomposed string into the terminal, and access the other file as well.

        Apparently, this filesystem, and linux filesystems in general, don't assume and enforce much about file name encoding: file names are just a sequence of bytes, and the user can keep the pieces.
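        A sketch of the same experiment (file names are my own, hypothetical): create both byte spellings of 'á' in a temp dir, read them back, and count how many survive NFC.

```perl
use strict;
use warnings;
use utf8;
use Unicode::Normalize qw(NFC);
use Encode qw(encode_utf8 decode_utf8);
use File::Temp qw(tempdir);

my $dir = tempdir(CLEANUP => 1);

# composed 'á' (U+00E1) vs decomposed 'a' + combining acute (U+0301):
# different byte strings, so on a typical Linux filesystem both can coexist
for my $name ("\x{e1}", "a\x{301}") {
    open my $fh, '>', "$dir/" . encode_utf8($name) or die $!;
    close $fh;
}

opendir my $dh, $dir or die $!;
my @names = map { decode_utf8($_) } grep { !/^\.\.?\z/ } readdir $dh;
closedir $dh;

printf "%d distinct entries\n", scalar @names;   # 2 on Linux (a normalizing
                                                 # filesystem may merge them)
my %seen = map { NFC($_) => 1 } @names;
printf "%d after NFC\n", scalar keys %seen;      # 1: both normalize to one name
```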

        My gut feeling is that Apple's way, that is, normalization, is "better" from a usability standpoint - but then it had better be completely consistent and enforced everywhere.