Most of the steps you took are fine, so just a few remarks:
- If you configure the server to send a UTF-8 content-type header, that usually only applies to static files, like the page your form resides in. For CGI, you are supposed to provide your own Content-type header (as you did), and the server must not touch it. These days, static pages can also declare their own encoding in a meta element, e.g. <meta charset="utf-8">.
- Adding accept-charset="UTF-8" is good for clarity (the default value is "UNKNOWN"), but it's not strictly required, since browsers are supposed to fall back to the encoding of the page containing the form when there's no accept-charset attribute. A sketch covering both of these points follows after this list.
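To make that concrete, here is a minimal sketch of a CGI script that serves the form page itself and declares UTF-8 both in the HTTP header and in the markup. The script path and field name are made up for illustration, not taken from your code:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Declare the encoding of the output stream and of the HTTP response.
binmode STDOUT, ":encoding(UTF-8)";
print "Content-type: text/html; charset=utf-8\n\n";

# The form declares accept-charset, and the page declares its own encoding
# in a meta element as well. "echo.pl" and "msg" are hypothetical names.
print <<'HTML';
<!DOCTYPE html>
<meta charset="utf-8">
<form action="/cgi-bin/echo.pl" method="post" accept-charset="UTF-8">
  <input type="text" name="msg">
  <input type="submit" value="Send">
</form>
HTML
```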
However, a few steps seem to be missing from your script:
- Your CGI script does not get any explicit information about the character encoding from the request. If the browser sends UTF-8, you need to decode the parameters accordingly yourself.
- If your CGI script writes UTF-8 in the response, you ought to print "Content-type: text/plain; charset=utf-8\n\n": in HTTP, the default charset is ISO-8859-1. Browsers can't infer the encoding if the Content-type header doesn't declare it, so they'll interpret the two bytes of your é separately and display é.
- If you get a "Wide character" warning (if you don't, then you forgot to decode), you might also have omitted to declare the encoding of your output stream. Perl, too, uses a single-byte encoding by default; to print Unicode characters as UTF-8, you need something like binmode STDOUT, ":encoding(UTF-8)";. A sketch combining these three points follows below.
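Putting those three points together, a minimal sketch of the receiving script could look like the following. The field name "msg" and the use of CGI.pm are assumptions about your setup, not your actual code:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use CGI;
use Encode qw(decode);

# CGI.pm hands back raw bytes; decode them as UTF-8 explicitly.
# (Alternatively, load the module with "use CGI qw(-utf8);".)
my $q   = CGI->new;
my $msg = decode('UTF-8', $q->param('msg') // '');

# Declare the encoding of the output stream, otherwise printing wide
# characters triggers the "Wide character in print" warning.
binmode STDOUT, ":encoding(UTF-8)";

# Declare the encoding to the browser as well, so it does not fall back
# to ISO-8859-1 and show the raw UTF-8 bytes.
print "Content-type: text/plain; charset=utf-8\n\n";
print "You sent: $msg\n";
```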
About diagnostics: Windows codepage 1252 and ISO-8859-1 are different but quite similar (they only disagree in the 0x80 to 0x9F range, where CP1252 has printable characters and ISO-8859-1 has control codes), so there's no way to distinguish between the two in this case.