paola82 has asked for the wisdom of the Perl Monks concerning the following question:

I've tried to modify my code, but it prints only the first two links. I need it to go to those links and then download a file from each of them: I extract the links from the main page, then follow them, but I can only manage it for a single link. I want it to visit every link, and then download from further links inside each page, but I can't make it repeat my cycle of actions... I don't know if my English is good enough to be understood... :-( sorry if not, I'm trying to improve that too. I'll paste the code.

#!/usr/local/bin/perl
use strict;
use warnings;
use LWP::Simple;
use HTML::TreeBuilder;

my $url3 = "http://www.ebi.ac.uk/thornton-srv/databases/cgi-bin/pdbsum/GetPage.pl?pdbcode=2j6p&template=ligands.html&l=1.1";
my $content = get($url3);

my $p = HTML::TreeBuilder->new;
$p->parse_content($content);

my @href;
my $href;
my @anchors = $p->look_down(_tag => q{a});
for my $anchor (@anchors) {
    my $txt = $anchor->as_text;
    if ($txt =~ /EPE\s/) {
        print $txt, qq{\n};
        $href = $anchor->attr(q{href});
        print $href, qq{\n};
        chomp($href);
        push @href, $href;
        # Now I need to go to the link where my EPE ligand is, then parse it
        # and extract the link to RunLigplot.pl. The output of the LigPlot
        # program (written in Perl) is a PostScript file, and I need that
        # file to extract info from... I need to repeat this parsing for
        # every link, for every EPE...
    }
}
$p->delete;

for my $param (@href) {
    my $content = get("http://www.ebi.ac.uk$param");
    process_content($content);    # << This is missing from your code !!!
}

sub process_content {
    my $content = shift;
    $p = HTML::TreeBuilder->new;
    $p->parse_content($content);
    my @href0;
    my @anchors0 = $p->look_down(_tag => q{a});
    for my $anchor0 (@anchors0) {
        my $href0 = $anchor0->attr(q{href});
        if ($href0 =~ /ligplot\d\d_\d\d'/) {
            print $href0, qq{\n};
            push @href0, $href0;
            for my $param0 (@href0) {
                $content = get("http://www.ebi.ac.uk$param0");
                my $content = shift;
                print my $param0;
                # I need to download files from every link $param0...
                my @files = (
                    ["http://www.ebi.ac.uk$param0", "$pdb.$param0.pl"],
                );
                for my $duplet (@files) {
                    mirror($duplet->[0], $duplet->[1]);
                }
            }
            $p->delete;
        }
    }
}

before...

Hi dear monks, I promise this is my last question for today... I'm going a little crazy :-( . I want to repeat the cycle of parsing for every file that I need. I'll paste a small part of my long code with my comments in English (so you can understand them; they were in Italian before).

#!/usr/local/bin/perl
use strict;
use warnings;
use LWP::Simple;
use HTML::TreeBuilder;

my $url3 = "http://www.ebi.ac.uk/thornton-srv/databases/cgi-bin/pdbsum/GetPage.pl?pdbcode=2j6p&template=ligands.html&l=1.1";
my $content = get($url3);

my $p = HTML::TreeBuilder->new;
$p->parse_content($content);

my @href;
my @anchors = $p->look_down(_tag => q{a});
for my $anchor (@anchors) {
    my $txt = $anchor->as_text;
    if ($txt =~ /EPE\s/) {
        print $txt, qq{\n};
        my $href = $anchor->attr(q{href});
        print $href, qq{\n};
        chomp($href);
        push @href, $href;
        # Now I need to go to the link where my EPE ligand is, then parse it
        # and extract the link to RunLigplot.pl. The output of the LigPlot
        # program (written in Perl) is a PostScript file, and I need that
        # file to extract info from... I need to repeat this parsing for
        # every link, for every EPE...
        my $i = 1;
        for my $href (@href) {
            my $url4 = "http://www.ebi.ac.uk$href[$i];
            $content = get($url4);
            $i = $i + 1;
        }
    }
}
$p->delete;

$p = HTML::TreeBuilder->new;
$p->parse_content($content);
my @href0;
my @anchors0 = $p->look_down(_tag => q{a});
for my $anchor0 (@anchors0) {
    my $href0 = $anchor0->attr(q{href});
    my $txt4 = $href0->as_text;
    if ($txt4 =~ /ligplot\d\d_\d\d'/) {
        print $txt4, qq{\n};
        push @href0, $txt4;
    }
}
$p->delete;
};

my $u = 1;
foreach $txt4 (@href0) {
    my $url5 = "http://www.ebi.ac.uk$txt4[$u]";
    $u = $u + 1;
    # I need to download every file of EPE; in this case these are my $url5:
    # http://www.ebi.ac.uk/thornton-srv/databases/cgi-bin/pdbsum/RunLigplot.pl?pdb=2j6p&file=ligplot04_01 and
    # http://www.ebi.ac.uk/thornton-srv/databases/cgi-bin/pdbsum/RunLigplot.pl?pdb=2j6p&file=ligplot04_02
    my @files = (
        ["$url5", "2j6p.$u.pl"],
        ["http://www.ebi.ac.uk/thornton-srv/databases/cgi-bin/pdbsum/GetText.pl?pdb=2j6p&chain=A&seq_fasta=1?pdb=2j6p&chain=A&seq_fasta=1",
         "$path/$Dir/2j6p.seq.fasta"],
    );
    for my $duplet (@files) {
        mirror($duplet->[0], $duplet->[1]);
    }

There'll be a syntax error, because it wants me to put "my" before my variable, but I've already put it and I don't actually know what it wants... maybe someone can help me... sorry, this is giving me a headache. Thanks a lot.
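For what it's worth, the "my" complaint usually means the variable is used outside the scope where it was declared: `foreach $txt4 (@href0)` fails under `use strict` because that `$txt4` was declared inside an earlier block. The related trap is that `"$href[$i]"` inside a double-quoted string interpolates element `$i` of the array `@href`, not the loop variable `$href`. A minimal, network-free sketch of both fixes (the hrefs here are made up):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Made-up hrefs standing in for the links extracted from the page.
my @hrefs = ('/a.html', '/b.html', '/c.html');

# Declare the loop variable with "my" in the loop itself; a "my" from
# some earlier block has gone out of scope by the time you reuse the name.
my @urls;
for my $href (@hrefs) {
    # Interpolate the loop variable directly; indexing with "$hrefs[$i]"
    # (as in the original $href[$i]) is unnecessary and error-prone.
    push @urls, "http://www.ebi.ac.uk$href";
}

print "$_\n" for @urls;
```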

Replies are listed 'Best First'.
Re: to repeat action for a cycle of files
by NetWallah (Canon) on May 25, 2009 at 16:14 UTC
    The most obvious bug I see is in this section of your code:
    my $i=1;
    for my $href(@href) {
        my $url4= "http://www.ebi.ac.uk$href[$i];
        $content=get($url4);
        $i=$i+1;
    }
    }
    Style-wise, I would write it like:
    for my $param ( @href ){
        my $content = get ( "http://www.ebi.ac.uk$param" );
        process_content ($content); # << This is missing from your code !!!
    }
    ....
    sub process_content{
        my $content = shift;
        .. Build HTML tree etc ...
    }
    In your code, $content is being overwritten for each href.
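    A network-free sketch of that difference, with plain strings standing in for the fetched HTML:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @pages = ('page one', 'page two', 'page three');

# Overwriting: each assignment clobbers the previous one, so after the
# loop $content holds only the final page.
my $content;
$content = $_ for @pages;

# Processing inside the loop: hand each page off while you still have it,
# so every page gets handled, not just the last.
my @processed = map { process_content($_) } @pages;

sub process_content {
    my $page = shift;
    return uc $page;    # stand-in for building the HTML tree, etc.
}

print "only the last survives: $content\n";
print "processed: $_\n" for @processed;
```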

         Potentia vobiscum ! (Si hoc legere scis nimium eruditionis habes)

      Hi, thanks for your answer. I've tried to modify my code, but it prints only the first two links. I need it to go to those links and then download a file from each of them: I extract the links from the main page, then follow them, but I can only manage it for a single link. I want it to visit every link, and then download from further links inside each page, but I can't make it repeat my cycle of actions... I don't know if my English is good enough to be understood... :-( sorry if not, I'm trying to improve that too. I'll paste the code.

    #!/usr/local/bin/perl
    use strict;
    use warnings;
    use LWP::Simple;
    use HTML::TreeBuilder;

    my $url3 = "http://www.ebi.ac.uk/thornton-srv/databases/cgi-bin/pdbsum/GetPage.pl?pdbcode=2j6p&template=ligands.html&l=1.1";
    my $content = get($url3);

    my $p = HTML::TreeBuilder->new;
    $p->parse_content($content);

    my @href;
    my $href;
    my @anchors = $p->look_down(_tag => q{a});
    for my $anchor (@anchors) {
        my $txt = $anchor->as_text;
        if ($txt =~ /EPE\s/) {
            print $txt, qq{\n};
            $href = $anchor->attr(q{href});
            print $href, qq{\n};
            chomp($href);
            push @href, $href;
            # Now I need to go to the link where my EPE ligand is, then parse it
            # and extract the link to RunLigplot.pl. The output of the LigPlot
            # program (written in Perl) is a PostScript file, and I need that
            # file to extract info from... I need to repeat this parsing for
            # every link, for every EPE...
        }
    }
    $p->delete;

    for my $param (@href) {
        my $content = get("http://www.ebi.ac.uk$param");
        process_content($content);    # << This is missing from your code !!!
    }

    sub process_content {
        my $content = shift;
        $p = HTML::TreeBuilder->new;
        $p->parse_content($content);
        my @href0;
        my @anchors0 = $p->look_down(_tag => q{a});
        for my $anchor0 (@anchors0) {
            my $href0 = $anchor0->attr(q{href});
            if ($href0 =~ /ligplot\d\d_\d\d'/) {
                print $href0, qq{\n};
                push @href0, $href0;
                for my $param0 (@href0) {
                    $content = get("http://www.ebi.ac.uk$param0");
                    my $content = shift;
                    print my $param0;
                    # I need to download files from every link $param0...
                    my @files = (
                        ["http://www.ebi.ac.uk$param0", "$pdb.$param0.pl"],
                    );
                    for my $duplet (@files) {
                        mirror($duplet->[0], $duplet->[1]);
                    }
                }
                $p->delete;
            }
        }
    }
Re: to repeat action for a cycle of files
by wfsp (Abbot) on May 26, 2009 at 13:01 UTC
    I think this might be something close to what you're after.
    #!/usr/local/bin/perl
    use strict;
    use warnings;
    use LWP::Simple;
    use HTML::TreeBuilder;

    my $url = "http://www.ebi.ac.uk/thornton-srv/databases/cgi-bin/pdbsum/GetPage.pl?pdbcode=2j6p&template=ligands.html&l=1.1";
    my $content = get($url);
    my $p = HTML::TreeBuilder->new;
    $p->parse_content($content);

    my @hrefs;
    my $href;
    my @anchors = $p->look_down(_tag => q{a});
    for my $anchor (@anchors){
        my $txt = $anchor->as_text;
        if ($txt =~ /EPE\s/){
            $href = $anchor->attr(q{href});
            push @hrefs, $href;
        }
    }
    $p->delete;

    for my $href (@hrefs){
        my $url = join(q{}, q{http://www.ebi.ac.uk}, $href);
        print qq{$url\n};
        my $content = get($url);
        my $p = HTML::TreeBuilder->new;
        $p->parse_content($content);
        my @anchors = $p->look_down(_tag => q{a});
        for my $anchor (@anchors){
            my $href = $anchor->attr(q{href});
            if ($href =~ /ligplot\d\d_\d\d/){    # e.g. ligplot04_01
                next if $href =~ /pdf/;
                my $url = join(q{}, q{http://www.ebi.ac.uk}, $href);
                print qq{\t$href\n};
                my $content = get($url);
                #print $content;
                # do something with the postscript file
            }
        }
        $p->delete;
    }
    print qq{done\n};
Re: to repeat action for a cycle of files
by ig (Vicar) on May 26, 2009 at 21:08 UTC

    In addition to the excellent suggestions you have already received, I thought you might like to see what the errors in your own script are and how to correct them.
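    The concrete errors visible in the posted `process_content` are: the stray quote in `/ligplot\d\d_\d\d'/`, which can never match the hrefs; a second `my $content = shift;` inside the loop, which clobbers the page just fetched; the syntax error `print my $param0;`; and the undeclared `$pdb`. A sketch of a repaired version follows; this is an assumption-laden rewrite, not ig's actual corrected listing, and the `$pdb` value and output filenames are guesses. The parsing is split from the downloading so the parsing can be checked without the network:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple qw(get mirror);
use HTML::TreeBuilder;

# Collect the ligplot hrefs from one page of HTML.
sub extract_ligplot_links {
    my ($content) = @_;    # take the argument once; no second shift
    my $p = HTML::TreeBuilder->new;
    $p->parse_content($content);
    my @links;
    for my $anchor ($p->look_down(_tag => q{a})) {
        my $href = $anchor->attr(q{href}) or next;
        # no trailing quote: the original /ligplot\d\d_\d\d'/ never matched
        push @links, $href if $href =~ /ligplot\d\d_\d\d/;
    }
    $p->delete;
    return @links;
}

my $pdb = '2j6p';    # was used as an undeclared $pdb in the original

# Inline HTML for illustration; with the real pages you would pass get($url).
my $html = q{<a href="/RunLigplot.pl?pdb=2j6p&file=ligplot04_01">EPE</a>};
for my $href (extract_ligplot_links($html)) {
    my ($name) = $href =~ /(ligplot\d\d_\d\d)/;
    print "would mirror http://www.ebi.ac.uk$href as $pdb.$name.ps\n";
    # mirror("http://www.ebi.ac.uk$href", "$pdb.$name.ps");
}
```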

Re: to repeat action for a cycle of files
by poolpi (Hermit) on May 26, 2009 at 14:45 UTC

    This piece of code downloads the two images you need, at least I hope ;)

    #!/usr/bin/perl
    use strict;
    use warnings;
    use WWW::Mechanize;

    $|++;

    my $site  = 'http://www.ebi.ac.uk';
    my $start = '/thornton-srv/databases/cgi-bin/pdbsum/GetPage.pl?pdbcode=2j6p&template=ligands.html&l=1.1';

    my $img_string = 'ligplot\d{2}_\d{2}';

    my $m = WWW::Mechanize->new( autocheck => 1 );
    $m->get( $site . $start );

    my @links = @{ $m->find_all_links( tag => 'a', text_regex => qr/^EPE\s/ ) };

    for my $link (@links) {
        print $link->text, "\n", $link->url, "\n";
        $m->follow_link( tag => 'a', text_regex => qr/^EPE\s/ )
            or die "can't follow link";
        my $img = $m->find_link( url_regex => qr/$img_string/ );
        $img->url =~ /($img_string)/;
        next unless defined $1;
        print " Fetching $1...",
            $m->mirror( $img->url_abs, $1 )->message, "\n";
    }


    hth,
    PooLpi

    'Ebry haffa hoe hab im tik a bush'. Jamaican proverb