But simply replacing calls to decode with calls to a wrapper function which does this check adds an additional Perl function call and regex per DB call, which has its own (small) overhead.
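For illustration, here is a minimal sketch of the kind of wrapper being described (the sub name maybe_decode is hypothetical, not from the thread):

    use Encode qw(decode);

    # One extra sub call plus one regex per invocation, on top of
    # decode itself whenever high bytes are present.
    sub maybe_decode {
        my ($string) = @_;
        return $string unless $string =~ /[\x80-\xBF]/;
        return decode( 'utf8', $string );
    }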
Hmm. Well, I suppose you could eliminate the extra function call, at least, by replacing each decode call with an idiom like this:
    $string =~ /[\x80-\xBF]/ and $string = decode( 'utf8', $string );

Note that every valid utf8 wide character will always have at least one byte with a value in the limited range of 0x80-0xBF, so that's the simplest, smallest, quickest regex match you can get to test for wide characters. If there are none, the statement short-circuits -- no function call at all (not even to decode).
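To make the idiom concrete, here is a self-contained sketch (the sample data is made up) showing it applied to a batch of strings, as one might do per database row:

    use strict;
    use warnings;
    use Encode qw(decode);

    my @rows = ( "plain ascii", "caf\xC3\xA9" );   # second holds utf8 bytes
    for my $string (@rows) {
        # Pure-ASCII strings fail the match and skip decode entirely.
        $string =~ /[\x80-\xBF]/ and $string = decode( 'utf8', $string );
    }
    print "$_\n" for @rows;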
(update: Actually, it's also true that every valid utf8 wide character must have a first byte that matches /[\xC2-\xF7]/, which is a somewhat smaller range to check.)
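The same idiom with the first-byte test would look like this (note that RFC 3629 actually restricts valid leading bytes to \xC2-\xF4; the wider \xC2-\xF7 range still works as a quick pre-filter):

    # Matches only bytes that can begin a multi-byte utf8 sequence.
    $string =~ /[\xC2-\xF7]/ and $string = decode( 'utf8', $string );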
Even if decode worked the way that the (faulty) docs said, the use of this sort of short-circuit idiom might still be faster than calling decode on every string.
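One could check that claim with a rough Benchmark sketch (results will vary by string length and Perl build):

    use strict;
    use warnings;
    use Encode qw(decode);
    use Benchmark qw(cmpthese);

    my $ascii = "just plain ascii text" x 10;

    cmpthese( -1, {
        # Unconditional decode, as the docs would suggest.
        always_decode => sub {
            my $s = $ascii;
            $s = decode( 'utf8', $s );
        },
        # The short-circuit idiom: regex first, decode only on a match.
        short_circuit => sub {
            my $s = $ascii;
            $s =~ /[\x80-\xBF]/ and $s = decode( 'utf8', $s );
        },
    } );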
If that's still too slow and heavy for you, maybe you need to do some C/XS coding...
In reply to Re^5: Behaviour of Encode::decode_utf8 on ASCII by graff
in thread Behaviour of Encode::decode_utf8 on ASCII by jbert