jdetloff has asked for the wisdom of the Perl Monks concerning the following question:

I've been trying to create a script that can gather some information off of a web page for me. After asking around, I was told Perl was my best bet, so I bought a book and read it. However, I still have some questions about how to complete my task.

My goal is to log onto an online game, navigate through some pages, read some information, and save it to a text file.

I was able to successfully log on to the game by using WWW::Mechanize, and I know how to print data from an HTML file to a text file, but I found that the game screen is made up of four frames, each displaying a .php file.

So my question is this: how can I parse links from a .php file, or copy data displayed from a .php file? Is there a separate module I should look for? Can I accomplish this with a clever use of LWP?

Re: Parsing Links from .php
by Corion (Patriarch) on Jan 10, 2010 at 20:57 UTC

    When scraping a website with frames, you will need to retrieve each frame. Other than that, websites generated through PHP (or rather, with a URL that contains the string php) are no different from websites served with other strings in their URLs. WWW::Mechanize is your best bet there.
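
    For example, here is a minimal sketch, assuming you have already logged in with $mech and are sitting on the frameset page (the URL below is a placeholder):

    use strict;
    use warnings;
    use WWW::Mechanize;

    my $mech = WWW::Mechanize->new();
    $mech->get('http://example.com/game/frameset.php');   # hypothetical URL

    # <frame> and <iframe> tags show up as links to WWW::Mechanize
    my @frames = $mech->find_all_links( tag_regex => qr/^i?frame$/ );
    for my $frame (@frames) {
        $mech->get( $frame->url_abs );   # fetch the frame's document
        print $mech->content;            # its HTML, ready for parsing
        $mech->back;                     # return to the frameset page
    }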

      Thanks for the reply!

      Can you be a bit more specific about what you mean when you say "retrieve each frame"? WWW::Mechanize can return them as links, which take me to their source, but if I do this I lose the other frames.

      Is there a way to deal with each of them without navigating away from the others? In the situation I'm working with, the links in one frame affect what is displayed in another frame. I need to navigate with one frame and still be able to read data or follow further links in the other.

        There is no way to retrieve the content of each frame other than navigating to it. If you want to keep multiple frames "open" at the same time, just ->clone your WWW::Mechanize object(s) and navigate the clones to the respective frames. Also, you can visit a frame and then go ->back() to return to the page linking to the frame.
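
        For instance, a minimal sketch of the clone approach (the frame order and the link text are hypothetical; adapt them to the real page):

        # One clone per frame, so each stays "open" independently.
        my @frames = $mech->find_all_links( tag_regex => qr/^i?frame$/ );
        my ( $nav_url, $main_url ) = map { $_->url_abs } @frames[ 0, 1 ];

        my $nav  = $mech->clone;   # clones share the login cookie jar
        my $main = $mech->clone;
        $nav->get($nav_url);
        $main->get($main_url);

        # Following a link in the navigation frame tells you what the game
        # would load into the content frame; fetch that URL with $main.
        my $link = $nav->find_link( text => 'Area 1' );   # hypothetical text
        $main->get( $link->url_abs );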

Re: Parsing Links from .php
by planetscape (Chancellor) on Jan 10, 2010 at 22:01 UTC

      Those look like they'll be a lot of help, thanks for the resource!

Re: Parsing Links from .php
by SilasTheMonk (Chaplain) on Jan 10, 2010 at 22:05 UTC

    It is irrelevant whether or not it is PHP. PHP generates HTML, and it is the HTML you need to parse. Using Apache's rewrite rules, I could generate HTML using Perl but make it look like PHP-generated output. A certain sort of webmaster might even want to do that to confuse hackers.

    WWW::Mechanize may be overkill for your needs; LWP, which the former is built upon, should be adequate. (I think where WWW::Mechanize comes in is when you need to log in to a website to see content, etc.)

    In an ideal world (well, actually, some people think that in an ideal world the web would use YAML or something rather than HTML, but that is a different story), all HTML would be valid XHTML. Then you could use XML::LibXML to parse the page and off you go. In practice this is highly unlikely to be the case. You are better off using HTML::TreeBuilder to get you going. Grabbing some code from something similar (but not reusable), you probably want something like:

    require LWP::UserAgent;
    require HTML::TreeBuilder;

    my $ua = LWP::UserAgent->new(......);
    my $response = $ua->get(......);
    if ($response->is_success) {
        # get the document from the web
        my $r = $response->decoded_content;   # or whatever
        my $tidied_doc = HTML::TreeBuilder->new_from_content($r)->as_HTML();
        ..................
    } else {
        die $response->status_line;
    }

    The other problem is that if the web page has any sort of international content, it is quite likely to declare itself as being encoded in Latin-1 but actually contain a mixture of Latin-1 and Unicode characters.
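
    If you run into that, one workaround is to override the charset that decoded_content trusts (the 'UTF-8' here is only a guess; you would verify it against the actual bytes of the page):

    # Force a particular decoding instead of the declared Latin-1.
    my $r = $response->decoded_content( charset => 'UTF-8' );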

      Thanks for the answer! I actually just got the O'Reilly Perl and LWP book, and I'll be better able to say if this works once I've read through it more, but I have a quick question right off the bat.

      Will this work for a situation where there are several frames, each with a separate .php file, whose links and data I need to access almost simultaneously?

      The site I'm using has a navigation frame above; the areas it links to are displayed in the frame below.

        Your iframe HTML will probably look something like:
        <iframe name="FRAME1" src="id77.htm" width="730" height="360" frameborder="0"></iframe>
        You need to pull out those iframe elements and do another LWP::UserAgent::get on the src attributes. It is a bit like writing a script that reads a web page and does stuff with the images. However, instead of an image, you have another round of HTML to parse. I am not sure what you mean by "almost simultaneously".
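
        A minimal sketch of that, reusing $ua and $response from the snippet above (it assumes the src attributes may be relative to the containing page, hence the URI resolution):

        use URI;
        use HTML::TreeBuilder;

        my $tree = HTML::TreeBuilder->new_from_content( $response->decoded_content );
        for my $iframe ( $tree->look_down( _tag => 'iframe' ) ) {
            my $src = $iframe->attr('src') or next;
            my $url = URI->new_abs( $src, $response->base );   # resolve relative src
            my $inner = $ua->get($url);
            print $inner->decoded_content if $inner->is_success;
        }
        $tree->delete;   # free the parse tree's memory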