I do a lot of HTML parsing/rewriting. Machine generated, FrontPage generated (shudder), user generated, even beautifully hand-knitted wfsp generated - and it always ends in tears.
50KB or 50 bytes, I don't care. "Where's the parser!" As with everything, the more you do it, the quicker it gets.
And anyway, my one-offs (in a case such as the OP's) always lead to "I wonder if any pages didn't have any links?", "How many?", "Which ones?", "Most popular link/least popular link?" And then, as night follows day, "What about a nice report? Sorted by file name/frequency and link/frequency?" You just can't predict what the site owner is going to come up with next. :-)
I've settled on HTML::TokeParser::Simple because I think it's as writable as it is readable (Ovid++), and far more writable, readable and robust than any regex is going to be.
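For the record, here's the sort of thing I mean - a rough sketch, not production code. The file names, the old/new hosts and the report format are made up for illustration; the shape of it, though, is the usual HTML::TokeParser::Simple pattern: walk the tokens, poke the ones you care about, and pass everything else through with as_is.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use HTML::TokeParser::Simple;

    # hypothetical file names and URL pattern - adjust to taste
    my $in_file  = 'page.html';
    my $out_file = 'page_rewritten.html';

    my $parser = HTML::TokeParser::Simple->new( file => $in_file );

    my %freq;          # href => how many times it was seen
    my $rewritten = '';

    while ( my $token = $parser->get_token ) {
        if ( $token->is_start_tag('a') ) {
            my $href = $token->get_attr('href');
            if ( defined $href ) {
                $freq{$href}++;

                # example rewrite: point the old host at the new one
                if ( $href =~ s{\Qhttp://old.example.com\E}{http://new.example.com} ) {
                    $token->set_attr( href => $href );
                    $token->rewrite_tag;    # rebuild the tag with the new attribute
                }
            }
        }
        $rewritten .= $token->as_is;        # everything else passes through untouched
    }

    open my $out, '>', $out_file or die "Can't write $out_file: $!";
    print {$out} $rewritten;
    close $out;

    # and the inevitable report: links sorted by frequency, then name
    if ( !%freq ) {
        print "$in_file: no links found\n";
    }
    else {
        for my $href ( sort { $freq{$b} <=> $freq{$a} || $a cmp $b } keys %freq ) {
            printf "%5d  %s\n", $freq{$href}, $href;
        }
    }

When the site owner comes back with "which pages had no links?" or "sorted the other way, please", it's a two-line change rather than another fight with a regex.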
Oh, and by the way, what's the command line? :-)
Update:
As I was writing this, the node expanded a tad! I agree with many of the later points, but my main view still stands.
In reply to Re^4: Replacing Text by wfsp
in thread Replacing Text by pyro.699