80degreez has asked for the wisdom of the Perl Monks concerning the following question:

use strict;
use WWW::Mechanize;
use HTTP::Cookies;

my $mech = WWW::Mechanize->new( agent =>
    "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7) Gecko/20040918 Firefox/0.9.3" );
my $usr      = "80degreez\@gmail.com";
my $pw       = "mypw";
my $currPage = 1;
my $numPro   = 0;
my $loopa    = 0;
my $fn       = 2;
my $proList  = ">>profiles.out";

open( OUT, ">>file.dump" );
$mech->get("http://indiecharts.com/");
$mech->cookie_jar( HTTP::Cookies->new() );

if ( $mech->content() =~ /<form Method="Post" action="login.asp"> / ) {
    $mech->form_number($fn);
    $mech->field( UserName => $usr );
    $mech->field( Password => $pw );
    $mech->submit();
    if ( $mech->success() ) {
        if ( $mech->content() =~ /Pick your IndieCharts Name/ ) {
            print "User $usr logged in successfully!\n";
            do {
                $mech->get("http://indiecharts.com/indie_Music_Artists.asp?Keyword=&Page=$currPage&butname=");
                my @profiles = $mech->find_all_links( text_regex => qr/^d{9}$/ );
                chomp @profiles;
                foreach my $parse (@profiles) {
                    if (@profiles) {
                        $numPro++;
                        open( OUT, $proList ) || print "cant open profiles file!";
                        print OUT "$parse\n";
                    }
                }
                print "$currPage $numPro\n\n";
                $currPage++;
            } while ( $currPage <= 500 );
        }
        else {
            print " User $usr was unable to log in successfully!\n";
        }
    }
}
else {
    print $mech->content;
}
close(OUT);
No, this is not for spam; it's just how I'm attempting to familiarize myself with Perl pattern matching. It logs in correctly and fetches the page correctly, but it doesn't match/parse the links correctly.

Replies are listed 'Best First'.
Re: Page Scraping
by kyle (Abbot) on May 01, 2007 at 20:37 UTC

    Your code:

    my @profiles = $mech->find_all_links( text_regex => qr/^d{9}$/ );

    ...says to find links whose text is exactly the letter 'd' repeated nine times (d{9}). Maybe you meant to match nine digits (\d{9}) instead?
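    A small standalone sketch (not from the original post) showing the difference between the two patterns:

```perl
use strict;
use warnings;

# 'd{9}' matches the literal letter 'd' nine times;
# '\d{9}' matches nine digit characters.
my $letters = 'ddddddddd';    # nine 'd' characters
my $digits  = '123456789';    # nine digits

print "letters match /^d{9}\$/\n"  if $letters =~ /^d{9}$/;   # prints
print "digits match /^\\d{9}\$/\n" if $digits  =~ /^\d{9}$/;  # prints
print "digits match /^d{9}\$/\n"   if $digits  =~ /^d{9}$/;   # never prints
```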

    Also, the WWW::Mechanize documentation says that find_all_links returns a list of WWW::Mechanize::Link objects. These would not be suitable to pass to chomp. The loop after that should probably start out something like:

    foreach my $parse ( map { $_->url() } @profiles) {

    This way you'll loop over the URLs that it found (rather than the objects).
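    A minimal, self-contained sketch of that pattern. It uses a hypothetical stand-in class in place of the WWW::Mechanize::Link objects (which provide url() and text() accessors) so it runs without a network connection:

```perl
use strict;
use warnings;

# Stand-in for WWW::Mechanize::Link -- hypothetical, for illustration only.
package FakeLink;
sub new { my ( $class, $url ) = @_; return bless { url => $url }, $class }
sub url { return $_[0]->{url} }

package main;

my @profiles = map { FakeLink->new($_) }
    '/artist.asp?butname=111111111',
    '/artist.asp?butname=222222222';

# Loop over the URLs, not the link objects themselves.
foreach my $parse ( map { $_->url() } @profiles ) {
    print "$parse\n";
}
```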

Re: Page Scraping
by akho (Hermit) on May 01, 2007 at 20:28 UTC
    That's because you're matching link text, not link url. And the regexp is wrong. Use $mech->find_all_links( url_regex => qr/\d{9}/ );.
Re: Page Scraping
by akho (Hermit) on May 01, 2007 at 20:44 UTC
    As far as I understood, you don't actually need to log in to see the pages you need. Otherwise, this seems to do what you want (or something resembling it):

use strict;
use warnings;
use Fatal qw/ open close /;
use WWW::Mechanize;
use Carp;

my $mech = WWW::Mechanize->new( autocheck => 1 );

open my $pro_list, '>>', 'profiles.out';

for my $curr_page ( 1 .. 3 ) {
    $mech->get("http://indiecharts.com/indie_Music_Artists.asp?Keyword=&Page=$curr_page&butname=");
    my @artist_links = $mech->find_all_links( url_regex => qr/\d{9}/ );
    print scalar @artist_links, " matching links on page $curr_page\n";
    for my $artist_link (@artist_links) {
        print $pro_list $artist_link->text(), "\n";
    }
}
      Akho, that kinda worked, but it retrieved the artist names instead of the butname= id #
        You could work this out yourself, but

        $artist_link->url() =~ /(\d{9})/;
        print $pro_list $1, "\n";

        should help.
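        A hedged sketch of that, with a hypothetical URL standing in for $artist_link->url(), plus a check that the capture actually succeeded before using $1 (using $1 after a failed match would reuse a stale capture):

```perl
use strict;
use warnings;

# Hypothetical URL for illustration; in the real script this would
# come from $artist_link->url().
my $url = 'http://indiecharts.com/artist.asp?butname=123456789';

if ( $url =~ /(\d{9})/ ) {
    print "$1\n";    # the nine-digit id
}
else {
    warn "no nine-digit id found in $url\n";
}
```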

        Ask if you don't understand some parts of my script.