Okay, I had this program for work that would recurse down a directory, read each file, grab some info from it, and do a few things, such as a word count (which needed the HTML gone), before outputting some SQL. Having decided that there is no regex method for stripping HTML that would allow me to sleep at night, I went with what is listed in the Cookbook, with a slight modification, as listed below:
use HTML::FormatText;
use HTML::Parse;

$clean = HTML::FormatText->new->format(parse_html($html)) if $html =~ m/<[^>]+>/;

Basically, it will only strip out the HTML if the text contains some semblance of an HTML tag. To check that this worked as anticipated, I did some profiling and benchmarking, and found it sped the script up on documents that had NO HTML from 2 minutes down to 30 seconds. (Sorry, this was a few months ago, and I don't have the profile results anymore.) I also ran this on some files that were mixed, and found it sped them up from 1 minute to 45 seconds. Not as big an improvement as the other case, but it works.

I then learned a great lesson: why munge data when it is not needed, and, in this case, why load up HTML::FormatText and HTML::Parse when they are not needed?
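A minimal sketch of that guard pattern, wrapped into a word-count routine like the one described above. Note that strip_html here is a hypothetical stand-in so the snippet runs on its own; the real script would call HTML::FormatText->new->format(parse_html($html)) instead, since a bare regex substitution is exactly the approach the post avoids:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Cheap guard: does the text contain anything resembling an HTML tag?
# Same check used in the one-liner above.
sub looks_like_html {
    my ($text) = @_;
    return $text =~ m/<[^>]+>/;
}

# Hypothetical stand-in for HTML::FormatText->new->format(parse_html($text)).
# A crude tag-removal regex is NOT the recommended stripper; it only keeps
# this sketch self-contained.
sub strip_html {
    my ($text) = @_;
    $text =~ s/<[^>]+>/ /g;
    return $text;
}

sub word_count {
    my ($text) = @_;
    # Only pay for the expensive stripping step when it is actually needed.
    $text = strip_html($text) if looks_like_html($text);
    my @words = split ' ', $text;
    return scalar @words;
}

print word_count("plain text here"), "\n";                # prints 3, no stripping done
print word_count("<p>some <b>bold</b> text</p>"), "\n";   # prints 3, after stripping
```

Documents with no tags skip the stripper entirely, which is where the 2-minutes-to-30-seconds win came from.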