I would expect the problem here is that ChatGPT's tokenisation, having been made by Americans, isn't careful to avoid splitting multi-byte UTF-8 characters across token boundaries. That will make any training based on such tokens susceptible to errors.
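To make concrete what "splitting across UTF-8 characters" means at the byte level, here is a rough Perl sketch. It is only an illustration, not ChatGPT's actual tokeniser: the string "café", the split point, and the use of Encode::FB_PERLQQ are all choices made here just to show the effect of a byte-oriented cut landing inside a multi-byte sequence.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Encode qw(encode decode);

    # "café" -- the final character is two octets in UTF-8 (0xC3 0xA9)
    my $text  = "caf\x{e9}";
    my $bytes = encode('UTF-8', $text);    # 5 octets: "caf" . "\xC3\xA9"

    # A purely byte-oriented split that happens to land inside the
    # two-octet sequence, the way a byte-level tokeniser can
    my $left  = substr($bytes, 0, 4);      # "caf\xC3" -- truncated character
    my $right = substr($bytes, 4);         # "\xA9"    -- orphaned continuation byte

    # Neither fragment is valid UTF-8 on its own; FB_PERLQQ renders the
    # malformed octets as \xHH escapes instead of dying
    print decode('UTF-8', $left,  Encode::FB_PERLQQ), "\n";   # caf\xc3
    print decode('UTF-8', $right, Encode::FB_PERLQQ), "\n";   # \xa9

Neither fragment round-trips to the original character on its own; anything trained on, or generated from, such fragments has to stitch the octets back together perfectly or the output is mojibake.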
In reply to Re^3: Solved... (was: Re: Yet another Encoding issue...) by etj, in thread Yet another Encoding issue... by Bod