in reply to Re^2: Solved... (was: Re: Yet another Encoding issue...)
in thread Yet another Encoding issue...

So-called "AI" has to tokenise its input, turning a stream of characters into integers. Tokenisation isn't restricted to breaking by word: it can split by letter, or indeed by byte.

I would expect the problem here is that ChatGPT's tokenisation, having been built by Americans, isn't careful to avoid splitting multi-byte UTF-8 characters across token boundaries. Any training based on such tokens will be susceptible to errors.
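
For illustration only, here is a minimal Perl sketch of naive fixed-width byte splitting (not how any real tokeniser works), showing what happens when a boundary lands inside a multi-byte character:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Encode qw(encode decode);

    binmode STDOUT, ':encoding(UTF-8)';

    # "café": the e-acute is one character but two bytes
    # (0xC3 0xA9) once encoded as UTF-8.
    my $text  = "caf\x{e9}";
    my $bytes = encode('UTF-8', $text);   # 5 bytes in total

    # Naive byte-level "tokenisation": chop into 4-byte chunks
    # with no regard for character boundaries.
    my @tokens = unpack '(a4)*', $bytes;

    for my $i (0 .. $#tokens) {
        # The split lands between 0xC3 and 0xA9, so each fragment
        # decodes with U+FFFD, the replacement character.
        printf "token %d: %s\n", $i, decode('UTF-8', $tokens[$i]);
    }

Real byte-pair-encoding tokenisers don't use fixed widths, but the failure mode is the same whenever a token boundary falls inside a multi-byte sequence.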
