This code works to download PDFs, but the files on the website have duplicate names, differentiated only by their links:

```perl
use WWW::Mechanize;

my $start = "http://www.xxxx.com/programs_results/";
my $mech  = WWW::Mechanize->new( autocheck => 1 );
$mech->get( $start );

my @links = $mech->find_all_links( url_regex => qr/\d+.+\.pdf$/ );

chdir $progdest or die "Can't change directory: $!\n";

for my $link ( @links ) {
    my $url      = $link->url_abs;
    my $filename = $url;
    $filename =~ s[^.+/][];    # keep only the last path segment
    $mech->get( $url, ':content_file' => $filename );
}
```

(Note: the original had `mech->get(...)` in the loop, which is a typo for `$mech->get(...)`.)
Every PDF is downloaded under the same name, "_20090326.pdf".
The links from the source are in the following pattern: /pdf/100004/_20090326.pdf, /pdf/100006/_20090326.pdf etc.
Because every file gets the same name, later downloads overwrite earlier ones, so not all files are saved. I need all of the files, including the ones that share a name. I believe a different regex is needed, and that's where I am stuck.
So the solution would be to build a final file name that includes more of the link, such as "100004_20090326.pdf", "100006_20090326.pdf", etc.
Could you help me with the modifications to the code above that would produce this result?

In reply to Question about WWW::Mechanize by palsy017
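One possible approach, sketched under the assumption that every link matches the pattern shown above (`/pdf/<digits>/<name>.pdf`): capture the numeric directory component as well as the base name, and join them to form the file name. The URL below is just the example from the question, not a real target.

```perl
use strict;
use warnings;

# Assumed link shape: /pdf/<digits>/<name>.pdf
# e.g. /pdf/100004/_20090326.pdf  ->  100004_20090326.pdf
my $url = "/pdf/100004/_20090326.pdf";

my $filename;
if ( $url =~ m{/(\d+)/([^/]+\.pdf)$} ) {
    $filename = "$1$2";    # directory number + base name
}

print "$filename\n";    # 100004_20090326.pdf
```

Inside the download loop, this would replace the `s[^.+/][]` substitution: build `$filename` from the captures and then call `$mech->get( $url, ':content_file' => $filename )` only when the match succeeded, so an unexpected URL shape doesn't produce an undefined name.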