in reply to Re^2: sentence-safe chop heuristics?
in thread sentence-safe chop heuristics?

Many excellent points, Grundle; I could almost say, "a grundle of excellent examples of cases where my preceding post fails horribly."

But -- perhaps my point was not made blatantly enough: the OP's requirements are unlikely to be met by any "lightweight" approach or simple algorithm; either will tend to produce simple-minded output.
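For instance, a naive regex split (a hypothetical one-liner, not anyone's actual code) trips over ordinary abbreviations:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # A "lightweight" approach: split wherever sentence-ending
    # punctuation is followed by whitespace.
    my $text = 'Dr. Smith arrived at approx. 10 a.m. and left soon after. No one saw him go.';
    my @sentences = split /(?<=[.!?])\s+/, $text;

    print "$_\n" for @sentences;
    # Prints five "sentences" instead of two, because "Dr.",
    # "approx." and "a.m." all look like sentence boundaries.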

As Not_a_Number noted higher up in this thread, Lingua::EN::Sentence may be a better choice (your added note regarding training is likely to be helpful to the OP), but unless I've missed something there (certainly possible, as I've only scanned it quickly), dealing with HTML entities is going to take a fair amount of extending.
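For the entity problem specifically, one modest sketch (assuming the input is plain text peppered with entities rather than full markup) is to decode them with HTML::Entities before the text reaches Lingua::EN::Sentence; tags themselves would still need stripping separately:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use HTML::Entities qw(decode_entities);
    use Lingua::EN::Sentence qw(get_sentences add_acronyms);

    # Extra abbreviations can be registered up front -- one way
    # of "training" the splitter.
    add_acronyms('approx', 'dept');

    my $raw = 'He said &quot;hello&quot; to Dr. Jones. Then he left.';

    # Turn &quot;, &amp;, and friends back into literal characters
    # so the splitter sees ordinary punctuation.
    my $text = decode_entities($raw);

    my $sentences = get_sentences($text);   # returns an array ref
    print "$_\n" for @{ $sentences || [] };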

Re^4: sentence-safe chop heuristics?
by Grundle (Scribe) on Apr 19, 2007 at 15:25 UTC
    Yes, you are absolutely correct! When dealing with HTML entities, this process should be done in two steps (sketched in the code below the list).

    Step 1: Data extraction - use an HTML parser to pull out all of the text first, so that it can be represented in a human-readable format.

    Step 2: Sentence extraction - use your sentence parser to break the human-readable text up into separate sentences.
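    A minimal sketch of those two steps, assuming HTML::TreeBuilder for the extraction and Lingua::EN::Sentence for the splitting (any competent HTML parser would do for step 1):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use HTML::TreeBuilder;
        use Lingua::EN::Sentence qw(get_sentences);

        my $html = do { local $/; <> };   # slurp an HTML document

        # Step 1: data extraction - parse the markup and keep only
        # the text; the parser decodes entities along the way.
        my $tree = HTML::TreeBuilder->new_from_content($html);
        my $text = $tree->as_text;
        $tree->delete;

        # Step 2: sentence extraction - split the readable text
        # into individual sentences.
        my $sentences = get_sentences($text);
        print "$_\n" for @{ $sentences || [] };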

    The problem is exacerbated further when you also have to consider different tagging formats such as XML and its many variants, an SGML standard, etc., ad nauseam.

    Here is another thought I had recently: would it be possible to write a grammar and use Parse::RecDescent to pull out sentences? I really haven't investigated it thoroughly yet, but I thought it might be an interesting exercise.
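    To give a flavour of what that might look like, here is a toy grammar of my own devising (untested beyond trivial input, and blind to abbreviations in exactly the way discussed above):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Parse::RecDescent;

        # Toy grammar: a sentence is a run of non-terminator
        # characters followed by terminating punctuation.
        my $grammar = q{
            document   : sentence(s) /\s*\Z/   { $item[1] }
            sentence   : body terminator       { $item[1] . $item[2] }
            body       : /[^.!?]+/
            terminator : /[.!?]+/
        };

        my $parser = Parse::RecDescent->new($grammar)
            or die "Bad grammar\n";

        my $text = 'This is one sentence. Here is another! And a third?';

        my $sentences = $parser->document($text);
        print "$_\n" for @{ $sentences || [] };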