I finally got output, but it looks like a kid did it. I'd like to polish it up and end up with a script that uses WWW::Mechanize more effectively.
#!/usr/bin/perl -w
use strict;
use LWP::Simple;

open FILE, "text1.txt" or die $!;

my $url;
my $text;
while (<FILE>) {
    $text = $_;
    $text =~ s/\s+//;
    $url = 'http://www.nobeliefs.com/' . $text;
    print qq[ '$url' ];
    $text =~ s#images/##;
    print "$text\n";
    getstore($url, $text) or die "Can't download: $@\n";
}
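One thing I know needs polishing beyond the layout: LWP::Simple's getstore() returns the HTTP status code of the response rather than a true/false value, so the "or die" branch never fires, and $@ is only set by eval anyway. Here is a minimal sketch of the check I think belongs there, using is_success(), which LWP::Simple also exports; the URL and file names are just the ones from my script above.

#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple qw(getstore is_success);

open my $list_fh, '<', 'text1.txt' or die "Can't open text1.txt: $!";

while ( my $text = <$list_fh> ) {
    $text =~ s/\s+//;    # removes the first run of whitespace (the trailing newline, if the line is clean)
    my $url = 'http://www.nobeliefs.com/' . $text;

    ( my $file = $text ) =~ s#images/##;    # save under the bare file name

    my $status = getstore( $url, $file );   # returns an HTTP status code such as 200 or 404
    is_success($status)
        or warn "Can't download '$url': status $status\n";
}
close $list_fh;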
How would I use chomp instead of $text =~ s/\s+//;? Nothing I tried worked.
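Part of my confusion, I think: chomp modifies its argument in place and returns the number of characters removed, so something like $text = chomp($_) leaves $text holding 0 or 1 rather than the line. A small sketch of the idiom as I understand it, assuming the lines end in a plain newline (chomp does not touch other trailing whitespace or a \r from a DOS-format file):

use strict;
use warnings;

open my $fh, '<', 'text1.txt' or die $!;
while ( my $text = <$fh> ) {
    chomp $text;    # strips the trailing newline in place; the return value is a count
    # equivalent shorthand, reading and chomping in one step:
    #   chomp( my $line = <$fh> );
    # if the file might contain \r\n endings or stray spaces, this is more thorough:
    #   $text =~ s/\s+\z//;
    print "got '$text'\n";
}
close $fh;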
My failure with WWW::Mechanize was almost complete. The most I could get it to do was dump the names of the images to STDOUT. How could I rewrite this to avoid all the nonsense of saving to a file that I then have to read back? The documentation for $mech->images says: "Lists all the images on the current page. Each image is a WWW::Mechanize::Image object. In list context, returns a list of all images. In scalar context, returns an array reference of all images." I tried a dozen different things, but I don't get why this is not list context:
#!/usr/bin/perl -w
use strict;
use WWW::Mechanize;

open FILE, "text2.txt" or die $!;

my $domain = 'http://www.nobeliefs.com/nazis.htm';
my $m = WWW::Mechanize->new;
$m->get($domain);
my @list = $m->images();
print "@list \n";
#$m->text();
#$m->content( format => 'text2.txt' );
#print FILE $m;
close FILE;
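If I'm reading the docs right, my @list = $m->images() already is list context; the elements are just WWW::Mechanize::Image objects, so printing @list shows stringified references rather than names. Each object has url() and url_abs() methods, and $mech->get() accepts the :content_file option (it passes through to LWP::UserAgent), so the images could be written straight to disk with no intermediate text file. A sketch along those lines; the basename() trick, the images/ filter, and the nobeliefs.com URL are my own assumptions carried over from the first script:

#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;
use File::Basename qw(basename);

my $domain = 'http://www.nobeliefs.com/nazis.htm';
my $mech   = WWW::Mechanize->new;
$mech->get($domain);

# list context: a list of WWW::Mechanize::Image objects
my @images = $mech->images;

for my $img (@images) {
    # only fetch things under the images/ directory, as in the first script
    next unless $img->url =~ m{images/};

    my $url  = $img->url_abs;             # absolute URI object for the image
    my $file = basename( $url->path );    # 'images/foo.jpg' becomes 'foo.jpg'

    print "fetching $url -> $file\n";
    $mech->get( $url, ':content_file' => $file );    # save the response body to disk
}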