http://qs1969.pair.com?node_id=1171951


in reply to Re^4: BUG: code blocks don't retain literal formatting -- could they?
in thread BUG: code blocks don't retain literal formatting -- could they?

My point was simply to suggest alternate fixes to the Perl Monks website.

Obviously, the best fix is to never mess with what's between code tags.

But, this would require PM to send proper, UTF8 encoded response content back to browsers.

There may be technical reasons why the PM website can't do that. Possible work-arounds to that include (but are not limited to):

Again, these are just alternatives to the proper solution. It would be great if PM were able to properly support UTF-8 content. We may have to live with a work-around.


Re^6: BUG: code blocks don't retain literal formatting -- could they?
by perl-diddler (Chaplain) on Sep 17, 2016 at 18:46 UTC
    But, this would require PM to send proper, UTF8 encoded response content back to browsers.
    Why? It works now without any extra work in normal text. The only problem is in the CODE blocks, BECAUSE something reformats the input into HTML entities.

    To fix that, I'd first try not doing that conversion in a code block (and maybe not in text areas). I seem to remember that the HTML entities were provided to allow having "special chars" (special to HTML syntax, like "<" and "&", etc.). But characters above U+007F shouldn't be a problem if they were left untouched. To handle display of "special chars" in any of the input, only convert them to HTML entities on post (if necessary). I'd bet that anything above the normal ASCII range would be fine to leave untouched.
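
    A minimal sketch of that idea in Perl, using HTML::Entities and naming only the HTML-special characters as unsafe (the sample string is illustrative; this is not PM's actual code):

        use strict;
        use warnings;
        use utf8;
        use HTML::Entities qw(encode_entities);

        # A hypothetical line typed into a code block: it mixes an
        # HTML-special character with a character above U+007F.
        my $input = q{if ($x < 3) { say "π" }};

        # Escape only the characters that matter to HTML syntax;
        # everything above the ASCII range passes through untouched.
        my $safe = encode_entities($input, '<>&"');

        # $safe is now:  if ($x &lt; 3) { say &quot;π&quot; }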

      But, this would require PM to send proper, UTF8 encoded response content back to browsers.
      Why?

      So the Content-type: header will have the correct charset= and encoding= attributes.
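
      For example, a correct declaration might look something like this (a sketch of a plain CGI-style response, not PM's actual code):

          # Declare the character set of the body in the response header.
          print "Content-Type: text/html; charset=UTF-8\r\n\r\n";

          # Or, with CGI.pm:
          #   use CGI qw(header);
          #   print header(-type => 'text/html', -charset => 'UTF-8');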

        I'm pretty sure that whatever PM sends (proper UTF-8 encoded responses or whatever) has no effect on what the Content-type header has for its charset and encoding attributes. A website can set the Content-type charset and encoding attributes to whatever it likes; that is independent of what it sends in the content stream. I.e., doing one doesn't force the other. They can both be done independently -- however, having them in agreement might be less confusing to some browsers.
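
        A small sketch of that independence in Perl (illustrative only): the header is just a claim about the bytes, and the actual encoding of the output stream is a separate step.

            use strict;
            use warnings;

            # What the header *claims* the bytes are ...
            print "Content-Type: text/html; charset=UTF-8\r\n\r\n";

            # ... and what the bytes actually are. Nothing forces these two
            # to agree; they only match if the output really is UTF-8.
            binmode STDOUT, ':encoding(UTF-8)';
            print "r\x{E9}sum\x{E9}\n";    # written out as UTF-8 bytes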

        What is likely the case is that those who are interested in UTF-8 set their browsers to assume that encoding for pages that don't declare one, since many HTML4 websites that don't declare an encoding still use UTF-8 -- whether by intention or because users type in UTF-8 strings that later get displayed to others. I.e., when we use UTF-8, most of us already see it properly as UTF-8 chars in our browsers. What is at issue is that the site converts such things into HTML entities when it scans our input, but it doesn't convert them back on output when they are in code blocks.

        The bug is that they are converted into HTML-entities in the first place.
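
        Roughly, the asymmetry being described looks like this (a guess at the behaviour, sketched with HTML::Entities; not PM's actual code):

            use strict;
            use warnings;
            use HTML::Entities qw(encode_entities);

            # What a user types inside <code> tags:
            my $typed  = qq{my \$price = "100\x{20AC}";};

            # On submission the whole post is entity-encoded, so the euro
            # sign is stored as an entity:
            my $stored = encode_entities($typed);   # now contains &euro; (and &quot; for the quotes)

            # On display, the stored text in a code block is not decoded
            # back, so the reader sees the literal "&euro;" instead of the
            # character that was typed.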

        Too bad no one is interested in fixing this. I guess they went AWOL... ;-)