in reply to How are regex character classes implemented?

At the risk of engaging in premature optimization, I would be inclined to suggest tuning for the common case, i.e. 7- or 8-bit ASCII. Setting up a bitmap for that is obviously straightforward. In practical terms you probably have to go with the list-of-pairs approach for the rest of the character set, since Unicode code points extend up to 0x10FFFF and I doubt you want to maintain a (very sparse) 1.1-megabit bitmap for each character class.
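For concreteness, here's a minimal C++ sketch of that hybrid scheme (the names and layout are my own, not taken from any particular regex engine): a flat 256-bit map answers the common case in one lookup, and a sorted list of [lo, hi) pairs covers the rest of the range.

    #include <algorithm>
    #include <cstdint>
    #include <iterator>
    #include <utility>
    #include <vector>

    struct CharClass {
        uint8_t ascii_bits[32] = {0};                        // 256 bits covering 0..255
        std::vector<std::pair<uint32_t, uint32_t>> ranges;   // sorted, disjoint [lo, hi) pairs

        // Add the range [lo, hi); assumes ranges are added in ascending order.
        void add(uint32_t lo, uint32_t hi) {
            for (uint32_t c = lo; c < hi && c < 256; ++c)
                ascii_bits[c >> 3] |= 1u << (c & 7);
            if (hi > 256)
                ranges.emplace_back(std::max<uint32_t>(lo, 256), hi);
        }

        bool contains(uint32_t c) const {
            if (c < 256)                                     // the common case: one table lookup
                return ascii_bits[c >> 3] & (1u << (c & 7));
            // Binary-search the sparse pair list for everything else.
            auto it = std::upper_bound(ranges.begin(), ranges.end(),
                                       std::pair<uint32_t, uint32_t>{c, UINT32_MAX});
            return it != ranges.begin() && c < std::prev(it)->second;
        }
    };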

And yes, that's 0x10FFFF, not 0xFFFF. Unicode is not limited to representing 64K characters, and something like 74K code points have actually been defined at this point. In the short term you could probably get away with only worrying about the lower 64K code points (the Basic Multilingual Plane), but it's not really a compliant Unicode implementation if you go that route.


Re: Re: How are regex character classes implemented?
by John M. Dlugosz (Monsignor) on Jul 19, 2002 at 18:15 UTC
    Yes, I was thinking that for a general-purpose component, sets containing only a few small ranges are easily handled with polymorphism (different representations for different cases), but it should also be efficient in the common case, where the character being tested falls in the ASCII/ANSI subrange.

    My favorite doodle right now is a class with 4 pointer members: small_low points to a 16-byte bitmap for 0..127, likewise small_high for 128..255. large points to an array of arrays that handle 16-bit characters, and huge points to a deeper chain of arrays of arrays for 31-bit characters.

    For testing, the small_low and small_high bitmaps are one pointer away, even if the large table is populated. It doesn't have to chase down multiple levels after seeing that the high byte is zero.
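    For illustration, a rough C++ doodle of that layout might look like this (my sketch, with the huge chain elided):

        #include <cstdint>

        class CharSet {
            uint8_t*  small_low  = nullptr;  // 16-byte bitmap for 0..127
            uint8_t*  small_high = nullptr;  // 16-byte bitmap for 128..255
            uint8_t** large      = nullptr;  // 256 pointers to 32-byte leaf bitmaps for 0..0xFFFF
            // uint8_t*** huge;              // two more levels would cover 31-bit characters

            static bool bit(const uint8_t* map, uint32_t i) {
                return map[i >> 3] & (1u << (i & 7));
            }

        public:
            bool test(uint32_t c) const {
                // The small bitmaps are one pointer away even when 'large' is
                // populated, so the 8-bit fast path never descends the tables.
                if (c < 128) return small_low  && bit(small_low,  c);
                if (c < 256) return small_high && bit(small_high, c - 128);
                if (c < 0x10000) {
                    const uint8_t* leaf = large ? large[c >> 8] : nullptr;
                    return leaf && bit(leaf, c & 0xFF);
                }
                return false;                // here we would chase the huge chain
            }
        };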

    The previous favorite design is a list of numbers. Even entries are starting points of a range (inclusive) and odd entries are ending points (exclusive). Do a binary search in the array and see if it's part of an "on" range or an "off" range.

    That is good for large collections with few distinct runs, but it uses a different algorithm than the ASCII case. The first design I mentioned is uniform throughout, just with different tree depths.
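    Continuing the C++ sketches from above, the boundary-array test is compact: because starts sit at even indices and ends at odd ones, a character is in the set exactly when a binary search lands at an odd index.

        #include <algorithm>
        #include <cstdint>
        #include <vector>

        // 'bounds' holds range starts at even indices (inclusive) and range
        // ends at odd indices (exclusive), in ascending order.
        bool in_set(const std::vector<uint32_t>& bounds, uint32_t c) {
            auto it = std::upper_bound(bounds.begin(), bounds.end(), c);
            return (it - bounds.begin()) % 2 == 1;
        }

    For example, the class [A-Za-z] is just {65, 91, 97, 123}.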

    —John

Re: Re: How are regex character classes implemented?
by theorbtwo (Prior) on Jul 19, 2002 at 06:14 UTC

    Moreover, the limitation to "only" 0x10ffff codepoints isn't completely technical. UTF-8 can support up to 2**(2**8) codepoints, IIRC.


    Confession: It does an Immortal Body good.

      UTF-8 supports 2**256 codepoints?! I don't think so.

      11111110 is the largest byte-count the first byte can encode, so that's followed by 7 groups of 6 bits, or 42 bits total.

      The ISO 10646 character set is defined on 31 bits. I guess they didn't want to worry about signed/unsigned, or perhaps wanted to leave a bit for the user? Anyway, it's certainly enough.

        11111110 is the largest byte-count the first byte can encode, so that's followed by 7 groups of 6 bits, or 42 bits total.

        If I understand the Unicode spec properly, there's an important distinction between Unicode code points (what we tend to think of as characters) and Unicode encodings, e.g. UTF-8. The current version of Unicode defines "only" the code points 0 through 0x10FFFF as possible characters, which they claim should be more than enough to handle every character in every modern and historical language ever written.

        There are then a variety of transformation formats defined for representing Unicode code points as actual bytes/octets:

        • UTF-8: a variable-length encoding in which Unicode code points 0-127 (the ASCII characters) are represented by a single octet, and other code points are represented using from 2 to 6 octets (sketched after this list). Used by Perl internally, and also intended for places like HTML documents where reducing file size and transmission time for the common case is particularly desirable.
        • UTF-16: a two-octet encoding that can directly represent about 63K Unicode code points, including large numbers of the CJK (Chinese-Japanese-Korean) unified ideographs. Some 16-bit values in UTF-16 are reserved for surrogate pairs, in which two sequential code units together represent one of the code points larger than 0xFFFF.
        • UTF-32: a four-octet encoding scheme that represents every Unicode code point without any form of escaping or surrogates. This does not, however, mean that there are actually 2^32 possible Unicode code points -- despite having 32 bits to work with, UTF-32 values larger than 0x10FFFF are explicitly illegal.
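        To make the first bullet concrete, here is a sketch of the legacy 1-to-6-octet UTF-8 scheme from RFC 2279 (my illustration; as noted above, the Unicode standard itself stops at 0x10FFFF, which needs at most 4 octets). The count of leading 1 bits in the first octet gives the sequence length, and each continuation octet carries 6 payload bits.

            #include <cstddef>
            #include <cstdint>

            // Encode a code point (at most 31 bits) into 'out'; returns the
            // number of octets written.
            size_t utf8_encode(uint32_t cp, uint8_t out[6]) {
                if (cp < 0x80) { out[0] = uint8_t(cp); return 1; }  // plain ASCII
                size_t len = cp < 0x800     ? 2
                           : cp < 0x10000   ? 3
                           : cp < 0x200000  ? 4
                           : cp < 0x4000000 ? 5
                           :                  6;
                for (size_t i = len - 1; i > 0; --i) {
                    out[i] = uint8_t(0x80 | (cp & 0x3F));  // 10xxxxxx continuation
                    cp >>= 6;
                }
                out[0] = uint8_t((0xFF00 >> len) | cp);    // length-marking lead octet
                return len;
            }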