Re: Retrieving contents of web pages

by OeufMayo (Curate)
on Aug 29, 2001 at 04:43 UTC


in reply to Retrieving contents of web pages

If you want to avoid the complexity of LWP and other HTML-parsing modules, you may want to look at WWW::Chat, which is one of the easiest ways to navigate through websites with Perl. This module generates LWP + HTML::Form scripts automatically via the webchatpp program.
There are still some features missing from this module, but it usually does a fair job, and more features may be added soon!
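If you're curious about what such a generated script roughly boils down to, here's a hand-written sketch using LWP::UserAgent, HTML::Form and HTML::LinkExtor directly. The URL, field names and credentials are placeholders, and this is not the exact code webchatpp produces:

#!/usr/bin/perl
# Rough sketch of the kind of thing webchatpp generates for you:
# fetch a page, fill in and submit a form, then collect the links.
# URL, form fields and credentials below are made-up placeholders.
use strict;
use warnings;
use LWP::UserAgent;
use HTML::Form;
use HTML::LinkExtor;

my $ua  = LWP::UserAgent->new;
my $res = $ua->get('http://www.mysite.com/loginpage.html');
die $res->status_line unless $res->is_success;

# Grab the first form on the page, fill in the login fields, submit it.
my ($form) = HTML::Form->parse($res->decoded_content, $res->base);
$form->value(login    => 'OeufMayo');
$form->value(password => 's33kret');
my $page = $ua->request($form->click);
die $page->status_line unless $page->is_success;

# Collect the links on the page we landed on.
my @links;
HTML::LinkExtor->new(sub {
    my ($tag, %attr) = @_;
    push @links, $attr{href} if $tag eq 'a' && $attr{href};
})->parse($page->decoded_content);

print "$_\n" for @links;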

A simple example webchatpp script of what you want may look like this:

GET http://www.mysite.com/loginpage.html
EXPECT OK
FORM login
F login=OeufMayo
F password=s33kret
CLICK
EXPECT OK
FOLLOW /Interesting link/
EXPECT OK
print join("\n", map { "@$_[1]\n\tURL: @$_[0]" } @links), "\n";

Pretty simple, isn't it?
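To actually run it: if I recall correctly, webchatpp simply expands the chat script into a plain Perl program on standard output, so something along these lines should do (the file name mysite.wc is just an example):

webchatpp < mysite.wc > mysite.pl && perl mysite.pl

or pipe the output straight into perl if you don't care about keeping the generated script around.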

--
my $OeufMayo = new PerlMonger::Paris({http => 'paris.mongueurs.net'});

Replies are listed 'Best First'.
Re: Re: Retrieving contents of web pages
by RayRay459 (Pilgrim) on Aug 29, 2001 at 20:05 UTC
    OeufMayo, thank you very much for your sample code. That looks like it may work. I'll look into it more deeply and probably post code if I get it to work. Thanks again.
    Ray
