See Re: HTML::Strip Problem for some sample code that probably does exactly what you want (get the page with LWP and strip the text with HTML::Parser). It also notes a few of the issues with screen scraping: redirects, meta refreshes, frames, and JavaScript can all work against you. Add spider-unfriendly sites to the list and you will soon want LWP::UserAgent and some wrapper code to get the data you probably want.
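A minimal sketch of that approach, assuming you just want the visible text: fetch the page with LWP::UserAgent, then feed the HTML to HTML::Parser with a text handler that accumulates decoded text and skips script/style blocks. The URL here is just a placeholder.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use HTML::Parser;

# In practice, fetch the page first with LWP::UserAgent, e.g.:
#   use LWP::UserAgent;
#   my $ua  = LWP::UserAgent->new( agent => 'Mozilla/5.0' );  # some spider-unfriendly sites block the default agent string
#   my $res = $ua->get('http://example.com/');                # placeholder URL
#   my $html = $res->is_success ? $res->decoded_content : die $res->status_line;

sub strip_html {
    my ($html) = @_;
    my $text = '';
    my $p = HTML::Parser->new(
        api_version => 3,
        # 'dtext' gives text with entities like &amp; already decoded
        text_h => [ sub { $text .= shift }, 'dtext' ],
    );
    # Skip the contents of <script> and <style> elements entirely
    $p->ignore_elements(qw(script style));
    $p->parse($html);
    $p->eof;
    return $text;
}

my $html = '<html><body><h1>Hi</h1><p>Some &amp; text</p></body></html>';
print strip_html($html), "\n";
```

This only covers the happy path; a real wrapper would also follow redirects, honour meta refreshes, and descend into frames as noted above.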
cheers
tachyon
In reply to Re: Text from Website by tachyon in thread Text from Website by new_monk