in reply to Re: Re: Re: Re: Re: Re: regex for utf-8
in thread regex for utf-8

Must be a new Notepad. The Notepad I know from Windows NT only handles the current ANSI code page, or "Unicode", which it saves as UTF-16LE with a BOM and CRLF line endings.

UTF-16 is what Windows functions that take "Unicode" strings expect. Well, almost... UCS-2 is exactly 16 bits per code point, period. Full UTF-16 uses a group of 2048 special code points (the surrogates), in pairs, to represent values of U+10000 and above.
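A quick sketch of how that pairing works (the surrogate ranges U+D800–U+DBFF and U+DC00–U+DFFF make up the 2048 code points mentioned above; the example code point and sub name are my own choices):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Split a code point above U+FFFF into its UTF-16 surrogate pair.
sub surrogate_pair {
    my ($cp) = @_;
    die "U+FFFF and below need no surrogates\n" if $cp < 0x10000;
    my $v = $cp - 0x10000;               # leaves a 20-bit value
    return (0xD800 | ($v >> 10),         # high surrogate: top 10 bits
            0xDC00 | ($v & 0x3FF));      # low surrogate: bottom 10 bits
}

# U+10400 (DESERET CAPITAL LETTER LONG I) as an example:
printf "U+10400 -> %04X %04X\n", surrogate_pair(0x10400);   # D801 DC00
```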

LE is "little endian". In my experience, Notepad doesn't work any other way.

BOM is the "byte order mark", formally named "zero-width no-break space", which is basically a no-op character. Its code point is U+FEFF, and U+FFFE is guaranteed never to be a character. So read the first two bytes of the file, and you can tell whether it's little endian or big endian.
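That two-byte check can be sketched in a few lines of Perl (the sub name is mine):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Guess the byte order of a UTF-16 file from its first two bytes.
# Returns 'UTF-16LE', 'UTF-16BE', or undef if no BOM is present.
sub utf16_byte_order {
    my ($path) = @_;
    open my $fh, '<:raw', $path or die "open $path: $!";
    read $fh, my $bom, 2;
    close $fh;
    return 'UTF-16LE' if $bom eq "\xFF\xFE";  # U+FEFF stored little-endian
    return 'UTF-16BE' if $bom eq "\xFE\xFF";  # U+FEFF stored big-endian
    return undef;                             # no BOM: can't tell from here
}
```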

That character also has a particular encoding in UTF-8, if you care to figure it out. That can be used as a signature to identify UTF-8 files, too.
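If you'd rather not work it out by hand, Perl's core Encode module will produce the signature for you; a sketch (the sub name is mine):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Encode qw(encode);

# The UTF-8 encoding of U+FEFF, computed rather than hard-coded.
my $sig = encode('UTF-8', "\x{FEFF}");
printf "%02X %02X %02X\n", unpack 'C*', $sig;   # EF BB BF

# Check whether a file starts with that three-byte signature.
sub has_utf8_bom {
    my ($path) = @_;
    open my $fh, '<:raw', $path or die "open $path: $!";
    read $fh, my $head, 3;
    close $fh;
    return $head eq $sig;
}
```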

Check out Unipad. It can save and load any format or variety. Playing with it might be enlightening.

Also check out the Unicode.org site.

—John


Replies are listed 'Best First'.
Re: Re: Re: Re: Re: Re: Re: Re: regex for utf-8
by jjohhn (Scribe) on Mar 01, 2003 at 02:10 UTC
    Thanks, John. This whole thing (dealing with non-ASCII characters in a file) started as a little detail and is growing to consume me. I know a whole lot more about UTF-8 and code pages than I knew two days ago, but I still can't search for and count the non-unicode characters in the file. I'm trying to get a general solution, but the higher-ups are satisfied with "pour it through MS Access and it will come out converted". We have an international product distributed as flat tab-delimited text files, and I don't think the MS Access approach will work for everybody unless they are only using the Windows ANSI code page.
      What's a "non-unicode character" in a file?

      Perl has modules for extensive manipulation in this area, and Perl reads UTF-8 natively.

        That should be "non-ASCII". My question is focusing down to the matching part. I guess I can find the end of a character, because I'll know how many bytes it has in total from the high bits of the first byte, but I don't know whether the "code point" includes those high bits or not. I need to find these characters, but also record what they are.

        My buddy did something similar in Java, because Java could read the file in character by character, and he looked for characters > 128. But he just printed the whole line containing the offending character, and I want to count the characters. I haven't looked at Java for about a year, but it may be worth swimming through public static void main to get to the solution. My deadline is coming up.

        Modules: I was hoping to learn how to do this myself, but I am beginning to think this may be beyond me right now. I can't believe nobody else has written a quick little script to do just this. I'm not used to coming up against such a brick wall when I want to do something that seems pretty simple on the face of it. I looked at the Encode module; it may do this. I've never used a module before.
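On the code-point question: once the bytes are decoded, the UTF-8 marker bits are gone, and ord() on each decoded character yields the bare code point. A minimal Perl sketch of the counting described above, using the core :encoding(UTF-8) I/O layer (the sub name and output format are my own choices, not anything from the thread):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Tally every non-ASCII character in a UTF-8 file.
# The :encoding(UTF-8) layer decodes each multi-byte sequence into one
# character, stripping the high "marker" bits, so ord() gives the bare
# code point.
sub count_non_ascii {
    my ($path) = @_;
    open my $fh, '<:encoding(UTF-8)', $path or die "open $path: $!";
    my %count;
    while (my $line = <$fh>) {
        $count{$_}++ for grep { ord($_) > 127 } split //, $line;
    }
    close $fh;
    return \%count;
}

# Example driver: report each code point and its count.
if (@ARGV) {
    my $tally = count_non_ascii($ARGV[0]);
    for my $ch (sort keys %$tally) {
        printf "U+%04X  %d\n", ord($ch), $tally->{$ch};
    }
}
```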