in reply to Re^2: HTML from single, double and triple encoded entities in RSS documents
in thread HTML from single, double and triple encoded entities in RSS documents

Is that how you read the OP's intent? I thought about it, but if the requirement is to retain the final level of entities, then his hardcoded 3 decodes will go belly up whenever he processes anything that has been encoded other than 3 times.

Even so, the logic of testing for a change in length works. You just have to retain 2 levels of 'undo' at each iteration. If the data being processed isn't too many megabytes each time, then something as simple as this will work regardless of how many times the content has been entity-encoded:

#! perl -slw
use strict;
use HTML::Entities;

my $data = '<p><b><i>AT&amp;T &lt;grin></i></b></p>';
$data = HTML::Entities::encode( $data ) for 1 .. rand( 10 );

my @saved = $data;
my $l1 = length $data;
{
    my $l2 = length( $data = HTML::Entities::decode( $data ) );
    if( $l2 < $l1 ) {
        push @saved, $data;
        $l1 = $l2;
        redo;
    }
}
$data = $saved[-2];
print $data;
__END__
P:\test>junk2
<p><b><i>AT&amp;T &lt;grin></i></b></p>

P:\test>junk2
<p><b><i>AT&amp;T &lt;grin></i></b></p>

P:\test>junk2
<p><b><i>AT&amp;T &lt;grin></i></b></p>

I still think that the logic shown in the OP's code ($title =~ s/strip_stuff_like_html_and_cdata_tags//g;), plus his description

Before working on the text we find inside title tags

suggests that he is interested in manipulating the content, not the markup.

And if this is ever destined to be redisplayed in a browser (of which I see no mention?), it will probably be in a completely different context from the one in which it was fetched.

Which suggests to me that it would be better to extract the text content, remove all entities to allow for DB storage, pattern matching, etc., and, if it is ever going to be redisplayed in a browser, re-encode the content before combining it with the new markup.

But you could be right.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Replies are listed 'Best First'.
Re^4: HTML from single, double and triple encoded entities in RSS documents
by Aristotle (Chancellor) on Jan 07, 2006 at 23:40 UTC

    Unfortunately, that does not work either. Try this as sample input data:

    my $data = 'But what about the &amp;amp; entity? &lt;sigh>';

    That needs to stay as it is, but you will find that it gets over-decoded into But what about the &amp; entity? <sigh>.
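    The failure can be seen with a minimal sketch: a toy decoder handling just the two entities in the sample (standing in for HTML::Entities::decode, so the sketch is self-contained) fed through a shrinking-length loop's logic.

```perl
#! perl -lw
use strict;

# Toy decoder for just '&lt;' and '&amp;'; it stands in for
# HTML::Entities::decode to keep this sketch self-contained.
sub decode_min {
    my $s = shift;
    $s =~ s/&lt;/</g;
    $s =~ s/&amp;/&/g;    # '&amp;' must be decoded last
    return $s;
}

my $data = 'But what about the &amp;amp; entity? &lt;sigh>';

# The first decode still shrinks the string, so a length-based loop
# cannot tell that $data was already in its final, correct form...
print decode_min( $data );               # But what about the &amp; entity? <sigh>

# ...and nothing in the data itself says where to stop:
print decode_min( decode_min( $data ) ); # But what about the & entity? <sigh>
```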

    It’s impossible to reliably infer what the data means from looking at the data itself.

    Really.

    Sorry. :-(

    I still think that the logic shown in the OP's code […] plus his description […] suggests that he is interested in manipulating the content, not the markup.

    Sure, but he must first reliably identify which parts are markup and which are not, so that he can strip the markup without stripping the content. After stripping the markup, he can decode once more to resolve entities to characters. But if he over-decodes &lt;sigh> to <sigh> in the first step, he’ll end up stripping it even though it was content.

    There is just no way around it: you do not and cannot know what the data means. It may seem mind-boggling that a technology with such wide adoption has such a fundamental and unresolvable flaw, but it’s true.

    (So anyone reading this who is planning to deploy syndication feeds: in the name of the sanity of feed reader developers, I implore you, please use the Atom format to publish, not RSS. You’ll do everyone a favour – including yourself and your readers.)

    Makeshifts last the longest.

      So what you're saying is, RSS is broken and until the entire world adopts the ATOM format or something similar, there is simply no point in trying to make any sense of any of the existing RSS feeds?

      Strikes me that is the same argument that said "Only perl can parse Perl", and so nobody tried. Until one day along came someone who had either never heard that, or simply decided to go ahead and try anyway, with the result that we have PPI.

      I realise that there are still some things that PPI won't handle, so the original missive is correct; but from what I've seen, the occasions when it would fail are the same occasions when, if the code were posted here, everyone would be throwing their arms up and saying: for the sake of your own sanity and that of maintenance programmers everywhere, "Don't do that!".

      For the vast majority of code created for anything other than deliberate obfu purposes, PPI seems to be able to do a pretty fine job.

      Given the way things work (there are still sites out there generating pre-HTML 2.0 markup; adoption of new standards always takes a long time), don't you think that there is some scope for doing the best you can with what is available now?



        The best you can do is decode exactly twice and write off the feeds where that’s not good enough as collateral damage. That will cover 98% or so of the feeds in the wild; the remaining 2% are simply unreclaimable, and whatever clever approach you employ as you try will just break a different (and larger) set of feeds.
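        That pragmatic rule is short enough to show. A sketch using HTML::Entities (the subroutine name feed_title_text is mine, not from the thread), assuming the title string has already been pulled out of the feed:

```perl
use strict;
use warnings;
use HTML::Entities qw( decode_entities );

# Pragmatic 98% rule: assume exactly one superfluous layer of
# encoding on top of the one the feed format requires, decode
# exactly twice, and accept that the rare feed encoded a different
# number of times will come out wrong.
sub feed_title_text {
    my $title = shift;
    $title = decode_entities( $title ) for 1 .. 2;
    return $title;
}

# A typical double-encoded title comes out as plain text:
print feed_title_text( 'AT&amp;amp;T &amp;lt;grin&gt;' );   # AT&T <grin>
```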

        The analogy to PPI is not very useful. Perl code has a precise meaning that can be read out unambiguously, even if it’s extremely hard to achieve that. In contrast, RSS titles do not have precise meaning to begin with.

        As for Atom, I have no illusions about what it means for writing software that consumes feeds: nothing, because you clearly can’t ignore the millions of deployed RSS feeds. Nor was I saying you should, at any point in this diatribe. I’m just imploring those who are only now writing feed generation code to please make the sane choice right off the bat, so that this headache, even though it is here to stay, will at least not grow (too much).

        (Please note that it’s Atom, not ATOM: it’s neither an acronym nor a backronym and doesn’t stand for anything; it’s a proper name.)

        Makeshifts last the longest.