Most of the time when I have to snarf a web page, I have to extract some data from it afterwards. I think it's *way* easier to do with the tools in perl than with a
cat | sed | awk | sort | sed | diff | sed | sed | awk | sed
chain. In perl, I can assign the $response to a variable, walk through it, strip the html, the tabular data, verify it against what I've expected, and stuff it into a db - all in one program. AND I can check against any errors occuring in any of those steps.
I've known people who spend all their time in sed/awk and can whip up scripts to do everything there - and I'm sure people can do it in emacs and make and C. I choose perl. Whatever works for you.