in reply to Understanding CGI.pm and UTF-8 handling

HTTP is all about transferring documents. The type of document is indicated by the Content-Type header. Some examples are text/html and image/jpeg. HTTP doesn't know, doesn't care, and has no field to indicate the character encoding used by the document. Technically, that wouldn't even make sense, because a document could use multiple encodings. It's up to the document to provide any information the receiver needs to interpret it.

Which brings us back to application/x-www-form-urlencoded. Form data is just another document to HTTP, since documents are the only thing it understands. And it's a pretty awful document format for international information exchange: it doesn't provide any information on the character encoding used, so CGI has nothing to act on. (I wonder if multipart/form-data is better at this.)
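To illustrate the ambiguity, here's a Python sketch (not CGI.pm itself, just urllib standing in for the browser): the same non-ASCII value produces different percent-encoded bytes depending on the charset the browser happened to use, and nothing in the payload says which one it was.

```python
from urllib.parse import urlencode

# The same field value, percent-encoded under two different charsets.
# The resulting payloads carry no hint of which encoding was used.
utf8_body = urlencode({"name": "café"}, encoding="utf-8")
latin1_body = urlencode({"name": "café"}, encoding="latin-1")

print(utf8_body)    # name=caf%C3%A9
print(latin1_body)  # name=caf%E9
```

A server that guesses wrong turns "café" into mojibake, and there's nothing in the request to tell it which guess is right.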

One backward-compatible solution would have been to allow/require the encoding to be specified as a Content-Type parameter, just as HTML does (e.g. text/html; charset=ISO-8859-5). This was never done.

Instead, HTML provides a means of requesting a specific character set through the accept-charset attribute of the FORM element*. Any conforming browser will encode the form data using the specified encoding. The recommended default is the encoding of the page containing the form.
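Once you know which encoding you asked the browser for, decoding on the server side is straightforward. A Python sketch, assuming the form carried accept-charset="ISO-8859-1" (CGI.pm itself hands you the bytes as-is; the decode step is yours):

```python
from urllib.parse import unquote_to_bytes

# Suppose the form specified accept-charset="ISO-8859-1", so we can
# assume the browser used that encoding for the submitted bytes.
raw = "name=caf%E9"
field, _, value = raw.partition("=")
decoded = unquote_to_bytes(value).decode("iso-8859-1")

print(decoded)  # café
```

The key point is that the knowledge of the encoding comes from your own form markup, not from anything in the request itself.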

* — You can technically specify multiple encodings and let the browser pick one, but the browser has no way of communicating which encoding it used. One trick is to include in a hidden field a string that gets encoded differently by each candidate encoding. For example, the BOM character (U+FEFF) would distinguish the various Unicode encodings.
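The BOM trick works because U+FEFF has a distinct byte sequence in each Unicode encoding. A Python sketch of the detection side (detect_encoding is a hypothetical helper, not part of any CGI library):

```python
# U+FEFF (the BOM) encodes to distinct bytes under each Unicode encoding,
# so a hidden field containing it reveals which encoding the browser chose.
CANDIDATES = ["utf-8", "utf-16-le", "utf-16-be"]

def detect_encoding(raw: bytes) -> str:
    """Hypothetical helper: match the raw bytes of the hidden BOM field."""
    for enc in CANDIDATES:
        if raw == "\ufeff".encode(enc):
            return enc
    raise ValueError("unrecognized encoding")

print(detect_encoding(b"\xef\xbb\xbf"))  # utf-8
print(detect_encoding(b"\xff\xfe"))      # utf-16-le
```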