palsy017 has asked for the wisdom of the Perl Monks concerning the following question:

If you could help me on this Mech issue, I sure would appreciate it. I am trying to retrieve PDFs from a website. I am using WWW::Mechanize as follows:
use WWW::Mechanize;

my $start = "http://www.xxxx.com/programs_results/";
my $mech  = WWW::Mechanize->new( autocheck => 1 );
$mech->get( $start );

my @links = $mech->find_all_links( url_regex => qr/\d+.+\.pdf$/ );

chdir $progdest or die "Can't change directory: $!\n";

for my $link ( @links ) {
    my $url      = $link->url_abs;
    my $filename = $url;
    $filename =~ s[^.+/][];    # keep only the part after the last '/'
    $mech->get( $url, ':content_file' => $filename );
}
This code works and downloads the PDFs, but the files on the website have duplicate names, differentiated only by the directory portion of their links.

The PDF files are all downloaded as "_20090326.pdf".

The links in the source follow this pattern: /pdf/100004/_20090326.pdf, /pdf/100006/_20090326.pdf, etc.

So not all files are saved since some are overwritten. I believe that a different regex is needed, and that's where I am stuck.

I need the other files with the same names.

So the solution would be to get a final file name that includes more of the link such as "100004_20090326.pdf," "100006_20090326.pdf," etc.

Could you help me with the modifications to the code above that would produce this result?

Re: Question about WWW::Mechanize
by AnomalousMonk (Archbishop) on Mar 30, 2009 at 02:43 UTC
    Something like this might serve:
    >perl -wMstrict -le
    "print q{output:};
     for my $url (@ARGV) {
         (my $filename = $url) =~
             s{ .* / (\d+) / (_ \d+ \. pdf) \z }{$1\L$2}xmsi;
         print qq{url: $url -> filename: $filename};
     }
    " /pdf/100004/_20090326.pdf /pdf/100006/_20090326.pdf /foo/bar/pdf/123/_456.PdF
    output:
    url: /pdf/100004/_20090326.pdf -> filename: 100004_20090326.pdf
    url: /pdf/100006/_20090326.pdf -> filename: 100006_20090326.pdf
    url: /foo/bar/pdf/123/_456.PdF -> filename: 123_456.pdf
    Or a slightly simpler version using split (output still lowercased):
    >perl -wMstrict -le
    "print q{output:};
     for my $url (@ARGV) {
         my $filename = lc join '', (split '/', $url)[-2, -1];
         print qq{url: $url -> filename: $filename};
     }
    " /pdf/100004/_20090326.pdf /pdf/100006/_20090326.pdf /foo/bar/pdf/123/_456.PdF
    output:
    url: /pdf/100004/_20090326.pdf -> filename: 100004_20090326.pdf
    url: /pdf/100006/_20090326.pdf -> filename: 100006_20090326.pdf
    url: /foo/bar/pdf/123/_456.PdF -> filename: 123_456.pdf
    Updates:
    1. Added lowercasing ("\L") to the s/// version's replacement string.
    2. Added the split version.
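    Folded back into the original download loop, the split approach might look something like this (a sketch, not tested against the real site, assuming the same $mech, @links, and working directory as in the question):
    for my $link ( @links ) {
        my $url = $link->url_abs;
        # build the file name from the last two path segments, e.g.
        # /pdf/100004/_20090326.pdf -> 100004_20090326.pdf
        my $filename = lc join '', ( split '/', $url->path )[-2, -1];
        $mech->get( $url, ':content_file' => $filename );
    }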
Re: Question about WWW::Mechanize
by Anonymous Monk on Mar 30, 2009 at 02:34 UTC
    You need an encoding, like:
    use URI::Escape;
    my $filename = URI::Escape::uri_escape( $link->url_abs->path );
    # path:    /pdf/100004/_20090326.pdf
    # escaped: %2Fpdf%2F100004%2F_20090326.pdf
      use URI::Escape;
      my $filename = URI::Escape::uri_escape( $link->url_abs->path );
      This line of Perl (listed above) works, but I need to make a change. The PDFs are saved with the encoding in the file name. How could this be filtered so that the end result looks like A, not B?

      A. 100004_20090326.pdf

      B. %2Fpdf%2F100004%2F_20090326.pdf

      Thanks a lot!!!
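      One way to get form A rather than form B is to skip the escaping and build the name from the unescaped path segments instead, along the lines of the replies above (a sketch, assuming $link is the link object from the original loop):
      my @segments = $link->url_abs->path_segments;   # ('', 'pdf', '100004', '_20090326.pdf')
      my $filename = join '', @segments[-2, -1];      # yields '100004_20090326.pdf'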