in reply to Re^5: collect data from web pages and insert into mysql
in thread collect data from web pages and insert into mysql

I haven't tried it yet but saw some things I'd like to comment on (if I've understood the code correctly; this new one was a bit above my level).

sub get_next_page{
    my ($t) = @_;

    # we want table 9
    my @tables = $t->look_down(_tag => q{table});
    my $table  = $tables[8];

    # first row
    my @trs = $table->look_down(_tag => q{tr});
    my $tr  = $trs[0];

    # second column
    my @tds = $tr->look_down(_tag => q{td});
    my $td  = $tds[1];

    # get any text
    my $page_number_txt = $td->as_text;

    # and test if it is a page number
    # will be undef otherwise
    my ($page) = $page_number_txt =~ /PAGE (\d) >/;

    return $page;
}

If I understand correctly, you load the next page, go to a particular spot in the source code and look at the page number? That will fail in my scenario: the server keeps serving whatever page number you request even if it contains no data, so no matter what page number you enter it will give you a valid answer. I just used last if $content =~ /No sorties/; which seems to do the trick.
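
Roughly what I mean (a sketch only; the loop shape and the "No sorties" marker are my assumptions about the pages, and $url/$pid are set up as in the sub further down):

use LWP::Simple qw(get);
use URI;

my $uri  = URI->new($url);
my $page = 1;
while ($page){
    $uri->query_form(page => $page, pid => $pid);
    my $content = get $uri->as_string;
    die qq{LWP get failed\n} unless defined $content;

    # the server serves any page number, so bail out as soon
    # as the page says there is nothing on it (assumed marker)
    last if $content =~ /No sorties/;

    # ... process the sorties on this page ...
    $page++;
}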

if ($href){
    # test for a sid in the query fragment
    my $uri = URI->new($href);
    my %q   = $uri->query_form;

    # save it if it is there
    push @sids, $q{sid} if exists $q{sid};
}

Guess this would be the perfect place for the second loop-exit condition: we want to stop processing sids when we find the last one previously processed ($lproc).
This variable needs to be read from the pid list file as well (not sure which delimiter is best, tab or semicolon?), so instead of one number per line as now, the actual DB export will contain two numbers per line (pid and lproc).
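
Something like this is what I had in mind for reading it back (the file name and tab layout are just placeholders):

use strict;
use warnings;

my $file = q{pid_list.txt};    # placeholder name
open my $fh, '<', $file or die qq{can't open $file: $!\n};

my %lproc_for;                 # pid => last processed sid
while (my $line = <$fh>){
    chomp $line;
    my ($pid, $lproc) = split /\t/, $line;
    $lproc_for{$pid} = $lproc;
}
close $fh;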

Q: Where do the sids end up?

Going to try it now and will be back with more comments. I really appreciate your help with this!

Re^7: collect data from web pages and insert into mysql
by wfsp (Abbot) on Aug 02, 2010 at 14:23 UTC
    When you go to the first page the top right side says, "PAGE 2 >". When you click on that you're on page 2. Then the top right hand side says, "PAGE 3 >". On page three there is nothing (there isn't a next page).

    What that sub (get_next_page) does is check if there is a link to a next page. If there is, it returns the page number and that is the page that is processed next. If there isn't a page number, it returns undef and that exits you out of the

    while ($page){
    loop. With hindsight I should have called the sub get_next_page_number because that is what it is doing (it's not loading the page).
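
    To illustrate, the calling loop inside get_sids boils down to something like this (a sketch, not the full sub):

    my $page = 1;
    while ($page){
        # fetch page $page, build the tree in $t ...
        # ... process the sorties on this page ...

        # returns the next page number, or undef on the last page;
        # undef makes the while condition false and ends the loop
        $page = get_next_page($t);
    }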

    The sub (get_sids) returns a list of all the sids. I reckon it would be simplest to do that and then decide which ones you want; grep might help with that (see the sketch below). A tab delimited record sounds as though it would do fine.
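
    Something along these lines, say (a sketch, assuming the sids come back newest first and $lproc is the last one already handled):

    # keep only the sids we haven't processed yet
    my @new_sids;
    for my $sid (@sids){
        last if $sid == $lproc;    # stop at the last processed sid
        push @new_sids, $sid;
    }

    # or, if the sids are simply increasing numbers:
    # my @new_sids = grep { $_ > $lproc } @sids;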

    By the way, there are, in this case, three calls to the website. So you have to give it a moment to finish.

    Let us know how you get on.

      Ah! Now I get it, smart thinking! So that solves condition 1 then.

      As for condition 2, I don't think fetching everything first and then sorting it out is the way to go, as people can rack up many hundreds of sorties in quite a short time, which means processing dozens of pages even if the lproc was on page 1.

        Adjust the way the get_sids() sub is called
        my $lproc = 621557;
        my @sids  = get_sids($url, $pid, $lproc);
        Change the sub
        sub get_sids{
            my ($url, $pid, $lproc) = @_;
            my $page = 1;
            my $uri  = URI->new($url);
            my ($i, @sids);

            while ($page){
                # build the uri
                $uri->query_form(page => $page, pid => $pid);
                my $uri_string = $uri->as_string;

                # get the content, check for success
                my $content = get $uri->as_string;
                die qq{LWP get failed: $!\n} unless $content;

                # build the tree
                my $t = HTML::TreeBuilder->new_from_content($content)
                    or die qq{new from content failed: $!\n};

                # get a list of all anchor tags
                my @anchors = $t->look_down(_tag => q{a})
                    or die qq{no anchors found: $!\n};

                # look at each anchor
                my $more = 1; # flag
                for my $anchor (@anchors){
                    # get the href
                    my $href = $anchor->attr(q{href});
                    if ($href){
                        # test for a sid in the query fragment
                        my $uri = URI->new($href);
                        my %q   = $uri->query_form;
                        my $sid = $q{sid};
                        next unless $sid;

                        # exit the while loop if it
                        # is the last processed sid
                        $more--, last if $sid == $lproc;

                        # otherwise save it
                        push @sids, $sid;
                    }
                }
                last unless $more;

                # see if there is another page
                $page = get_next_page($t);

                # avoid accidental indefinite loops
                # hammering the server, adjust to suit
                die if $i++ > 5;
            }

            # send 'em back
            return @sids;
        }
        Have a look at the URI docs to see what $uri->query_form does. Very useful.
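
        For example (a quick sketch with a placeholder URL):

        use URI;

        my $uri = URI->new(q{http://example.com/sorties});
        $uri->query_form(page => 2, pid => 12345);
        print $uri->as_string, "\n";  # http://example.com/sorties?page=2&pid=12345

        my %q = $uri->query_form;     # parse it back into key/value pairs
        print $q{pid}, "\n";          # 12345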

        Update: corrected the sub