palsy017 has asked for the wisdom of the Perl Monks concerning the following question:
This code works to download PDFs, but the files on the website have duplicate names, differentiated only by their links.

```perl
use strict;
use warnings;
use WWW::Mechanize;

my $start = "http://www.xxxx.com/programs_results/";
my $mech  = WWW::Mechanize->new( autocheck => 1 );
$mech->get( $start );

# Collect every link whose URL ends in .pdf
my @links = $mech->find_all_links( url_regex => qr/\d+.+\.pdf$/ );

my $progdest = 'pdfs';   # destination directory; placeholder value, set as needed
chdir $progdest or die "Can't change directory: $!\n";

for my $link ( @links ) {
    my $url      = $link->url_abs;
    my $filename = $url;
    $filename =~ s[^.+/][];    # keep only the part after the last slash
    $mech->get( $url, ':content_file' => $filename );
}
```
The PDFs are all saved under the same name, "_20090326.pdf".
The links from the source are in the following pattern: /pdf/100004/_20090326.pdf, /pdf/100006/_20090326.pdf etc.
So not all of the files survive: later downloads overwrite earlier ones that share a name, and I need every file, including the ones with duplicate names. I believe a different regex is needed, and that's where I am stuck.
The solution would be to build a final file name that includes more of the link, such as "100004_20090326.pdf", "100006_20090326.pdf", etc.
Could you help me with the modifications to the code above that would produce this result?
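For reference, here is a minimal sketch of one way to do it, assuming every link follows the /pdf/NNNNNN/name.pdf pattern shown above: capture the numeric directory component of the URL and splice it into the file name. This would replace the loop in the code above.

```perl
for my $link ( @links ) {
    my $url = $link->url_abs;
    # Capture the numeric directory and the base name, e.g.
    # "/pdf/100004/_20090326.pdf" -> ("100004", "_20090326.pdf")
    if ( my ( $dir, $base ) = $url =~ m{/(\d+)/([^/]+\.pdf)$} ) {
        my $filename = $dir . $base;   # "100004_20090326.pdf"
        $mech->get( $url, ':content_file' => $filename );
    }
    else {
        warn "Skipping unrecognized link: $url\n";
    }
}
```

Simple concatenation works here only because the base names already begin with an underscore; if that ever changes, `"${dir}_${base}"` would keep the separator.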
Replies are listed 'Best First'.
Re: Question about WWW::Mechanize
by AnomalousMonk (Archbishop) on Mar 30, 2009 at 02:43 UTC
Re: Question about WWW::Mechanize
by Anonymous Monk on Mar 30, 2009 at 02:34 UTC
by palsy017 (Initiate) on Mar 30, 2009 at 18:47 UTC