Most natural language parsers I have seen use heuristics plus some manual touch-up, or else a statistical approach. For instance, in classifying email as spam or ham, Mail::SpamAssassin uses a combination of regexps and parsing to catch common spam phrases and structural features of the email, then combines the rule hits with a linear weighting (i.e., a perceptron) for classification. More recently, it has incorporated a Bayesian classifier to add a customized, adaptive component to the classification.
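To give a rough feel for the rule-plus-weights part, here is a toy sketch. The rule patterns, weights, and threshold are made up for illustration; they are not SpamAssassin's actual rules or scores, just the same scoring idea.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Toy rule set: each rule is a regexp plus a hand-assigned weight.
# SpamAssassin's real rule set is much larger and its scores are
# tuned automatically, but the linear-scoring idea is the same.
my @rules = (
    { re => qr/viagra|cialis/i,  score => 2.5 },
    { re => qr/100% free/i,      score => 1.8 },
    { re => qr/click here now/i, score => 1.0 },
    { re => qr/unsubscribe/i,    score => 0.5 },
);
my $threshold = 5.0;    # assumed cutoff

sub classify {
    my ($msg) = @_;
    my $total = 0;
    for my $rule (@rules) {
        $total += $rule->{score} if $msg =~ $rule->{re};
    }
    return ( $total >= $threshold ? 'spam' : 'ham', $total );
}

my $msg = do { local $/; <STDIN> };    # read the whole message from stdin
my ( $verdict, $score ) = classify($msg);
print "$verdict (score $score)\n";
```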
I think you could use a similar approach here. The first thing to do is to look at a bunch of these sites and identify the likely patterns for locating and extracting titles. Program these in, most common to least common.
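For example, something along these lines. The patterns below are purely illustrative guesses; the ones that actually pay off depend on what your survey of the sites turns up.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Candidate patterns, tried most common first.
my @patterns = (
    qr{<title[^>]*>\s*(.+?)\s*</title>}is,
    qr{<h1[^>]*>\s*(.+?)\s*</h1>}is,
    qr{<meta\s+name=["']title["']\s+content=["'](.+?)["']}is,
);

sub extract_title {
    my ($html) = @_;
    for my $pat (@patterns) {
        return $1 if $html =~ $pat;
    }
    return undef;    # nothing matched; fall back to the classifier below
}

my $html  = do { local $/; <STDIN> };
my $title = extract_title($html);
print defined $title ? "$title\n" : "no title found\n";
```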
As a backup, train a Naive Bayes classifier on the context surrounding a title string, with classes of title/no_title. After training it up, run your HTML through it and pick the title based on the most probable title context.
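Here is a minimal hand-rolled sketch of such a classifier. The feature names and training examples are invented placeholders; in practice you would extract real context features (surrounding tags, nearby words, position on the page) from hand-labelled pages.

```perl
#!/usr/bin/perl
use strict;
use warnings;

my ( %count, %class_total, %docs, %vocab, $ndocs );

# Record one labelled training context (a bag of feature words).
sub train {
    my ( $class, @words ) = @_;
    $docs{$class}++;
    $ndocs++;
    for my $w (@words) {
        $count{$class}{$w}++;
        $class_total{$class}++;
        $vocab{$w} = 1;
    }
}

# log P(class) + sum of log P(word|class), with add-one smoothing.
sub log_prob {
    my ( $class, @words ) = @_;
    my $v  = scalar keys %vocab;
    my $lp = log( $docs{$class} / $ndocs );
    for my $w (@words) {
        my $c = $count{$class}{$w} || 0;
        $lp += log( ( $c + 1 ) / ( $class_total{$class} + $v ) );
    }
    return $lp;
}

# Toy hand-labelled contexts.
train( 'title',    qw(h1 bold centered near_top short) );
train( 'title',    qw(h1 large near_top short) );
train( 'no_title', qw(p long footer sidebar link_list) );
train( 'no_title', qw(p long table small_font) );

# Classify the context around one candidate string.
my @context = qw(h1 centered short);
my %score;
$score{$_} = log_prob( $_, @context ) for keys %docs;
my ($best) = sort { $score{$b} <=> $score{$a} } keys %score;
printf "most probable class: %s\n", $best;
```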
Whether all this work is less than the work of maintaining custom parsers for all your sites is something you will have to decide. For sorting spam, it is a clear win.
-Mark