In what sense is using length to count Unicode characters a bug waiting to happen, though?
It's a "bug waiting to happen" when you try to make meaningful inferences about Unicode text from the size in bytes of that text in a specific Unicode encoding scheme (e.g., UTF-8). That's what another monk was hinting at earlier in this thread with the suggestion to "divide by two": it assumes every character occupies exactly two bytes, which is false in UTF-8 (one to four bytes per code point) and false even in UTF-16 for anything outside the Basic Multilingual Plane, where a surrogate pair takes four bytes. That's a bug waiting to happen.
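To make the pitfall concrete, here's a small sketch (in Python, since the issue is language-agnostic) showing how "bytes divided by two" disagrees with the actual character count as soon as any character takes other than two bytes:

```python
# "Bytes divided by two" is not a character count.
s = "caf\u00e9"            # 4 code points: c, a, f, é
utf8 = s.encode("utf-8")   # é encodes as 2 bytes in UTF-8
print(len(s))              # 4 code points
print(len(utf8))           # 5 bytes
print(len(utf8) // 2)      # 2 -- nothing like the real count

t = "\U0001F600"           # one emoji, outside the BMP
utf16 = t.encode("utf-16-le")
print(len(utf16) // 2)     # 2 -- a surrogate pair, not 1 character
```

Even in UTF-16, where "divide by two" looks plausible, the emoji above shows it counting UTF-16 code units, not characters.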
In general, when dealing with Unicode text, you're much more likely to need to know the number of code points in a string, or the number of graphemes in it ("extended grapheme clusters" in Unicode standardese).

However, there are situations in which you do need the length in bytes of a Unicode string in some specific encoding. One example is storing character data in a database column whose capacity is measured in bytes rather than in code points or graphemes. If the column holds, say, 255 bytes, then the number of UTF-8 encoded Chinese characters you can insert into it will be far smaller than the number of UTF-8 encoded Latin characters, since common Chinese characters take three bytes each in UTF-8 while unaccented Latin letters take one. In that case, knowing the size of the string in code points or graphemes won't answer the question "Will it fit?" You need the size in bytes.
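A minimal sketch of that "will it fit?" check, again in Python; the 255-byte capacity and the `fits` helper are hypothetical, standing in for whatever your database column actually enforces:

```python
CAPACITY = 255  # hypothetical column capacity, measured in bytes

def fits(text: str, capacity: int = CAPACITY) -> bool:
    """True if the UTF-8 encoding of text fits in capacity bytes."""
    return len(text.encode("utf-8")) <= capacity

latin = "a" * 255          # 255 code points -> 255 UTF-8 bytes
chinese = "\u4e2d" * 255   # 255 code points -> 765 UTF-8 bytes (3 each)

print(fits(latin))          # True:  255 bytes
print(fits(chinese))        # False: 765 bytes
print(fits("\u4e2d" * 85))  # True:  85 Chinese characters fill all 255 bytes
```

Both strings have the same length in code points, but only the byte length of the encoded form tells you whether the insert will succeed.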