ghenry has asked for the wisdom of the Perl Monks concerning the following question:
Dear all,
I have a two-part program that sits in the hooks folder of a Subversion repo. After a commit, the first part needs to check out (or download) the latest version, either with the svn command or via a standard Subversion Apache browse. The second part uploads the newly grabbed files to aberdeen.pm.org via PerlDAV.
The second part is done.
For the first part, I create a temporary folder with File::Temp and then move on. Should I use LWP for recursive downloads? I'd rather not shell out to a load of system commands etc.
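One way to avoid recursive LWP downloads entirely is to let the svn client do the grabbing with `svn export`, which writes a clean tree (no `.svn` metadata) ready for upload. The sketch below is only illustrative, not code from this thread; the destination layout and the `site` directory name are assumptions.

```shell
#!/bin/sh
# Hypothetical post-commit hook sketch -- Subversion invokes the hook
# with the repository path and the committed revision as arguments.
REPOS="$1"
REV="$2"

# Create a fresh temporary directory (the File::Temp step in shell form).
DEST=$(mktemp -d) || exit 1

# Export the just-committed revision without working-copy metadata.
svn export --quiet -r "$REV" "file://$REPOS" "$DEST/site"

# "$DEST/site" now holds the latest files, ready for the second
# (PerlDAV upload) stage to pick up.
```

The same idea works from Perl by calling `svn export` once via `system`, which keeps the script down to a single external command rather than "a load of system commands".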
What module/feature have I blindly missed?
Thanks,
Gavin.
Replies are listed 'Best First'.
Re: Best way to recursively grab a website
by gjb (Vicar) on Mar 29, 2005 at 11:53 UTC
  by ghenry (Vicar) on Mar 29, 2005 at 12:21 UTC

Re: Best way to recursively grab a website
by tlm (Prior) on Mar 29, 2005 at 10:52 UTC
  by ghenry (Vicar) on Mar 29, 2005 at 10:56 UTC

Re: Best way to recursively grab a website
by webchalkboard (Scribe) on Mar 29, 2005 at 10:45 UTC
  by ghenry (Vicar) on Mar 29, 2005 at 10:54 UTC
  by inman (Curate) on Mar 29, 2005 at 11:04 UTC
  by b10m (Vicar) on Mar 29, 2005 at 12:26 UTC