I may get a bit of flak for this, as I know micro-optimization is frowned upon basically everywhere... but technically this isn't entirely about optimization.
You see, I have two options here. I could type Unicode characters as hex escape sequences, \x{...} (tedious to look up), XOR I could enable the utf8 pragma and put Unicode characters into the script literally, mostly by copy and paste or some other method (also tedious).
I know the general rule is to avoid micro-optimizing and go with readability/maintainability, but here I have two tedious methods: one easier to type but harder to read, and one harder to type but easier to read. Having 'wide' characters is UNavoidable in my case. So I've decided on the 'which generally performs better' route, even though I realize that the impact is going to be minuscule. Nonetheless, my question is: does use utf8 affect script performance more than hex escape sequences (at the lowest possible level)?
Sidenote: the script currently contains only about 20 multibyte characters in total (slightly more than 50 bytes). I would say the script is very unlikely to ever have more than double that number.
EDIT: Accidentally said 'avoidable' when I meant UNavoidable. My apologies!
In reply to Other effects of 'use utf8' on script by YenForYang