xorl has asked for the wisdom of the Perl Monks concerning the following question:
I'm sure this is one of those things that has already been coded and is out there somewhere. I'm looking for something that, given a URL, will spider the site and store a copy of it locally.
I have a pretty good idea of how to go about writing one. I'm just lazy. I figure if no one can find something out there, I can always take linklint and make it save the files after it checks them (is that a good idea?).
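For concreteness, the kind of thing I'd otherwise write myself is roughly the (untested) sketch below, using LWP::UserAgent, HTML::LinkExtor, and URI from CPAN. The mirror/ output directory and the mini-mirror agent string are just placeholders, and it ignores robots.txt, query strings, and colliding file/directory paths:

```perl
#!/usr/bin/perl
use strict;
use warnings;

use LWP::UserAgent;
use HTML::LinkExtor;
use URI;
use File::Path     qw(make_path);
use File::Basename qw(dirname);

# Breadth-first, same-host spider: fetch each page, save it under
# ./mirror/<host>/..., and queue any http(s) links back to the start host.
my $start = shift @ARGV or die "Usage: $0 http://example.com/\n";
my $root  = URI->new($start);

my $ua = LWP::UserAgent->new( agent => 'mini-mirror/0.1' );

my ( %seen, @queue );
push @queue, $root;

while ( my $uri = shift @queue ) {
    next if $seen{ $uri->as_string }++;

    my $res = $ua->get($uri);
    next unless $res->is_success;

    # Save the raw bytes so the local copy matches what the server sent.
    save_local( $uri, $res->content );

    # Only HTML pages are parsed for further links.
    next unless $res->content_type eq 'text/html';

    my $html  = $res->decoded_content // $res->content;
    my $extor = HTML::LinkExtor->new(
        sub {
            my ( $tag, %attr ) = @_;
            for my $link ( values %attr ) {
                my $abs = URI->new_abs( $link, $uri );
                $abs->fragment(undef);    # treat #foo anchors as the same page
                push @queue, $abs
                    if $abs->scheme =~ /^https?$/
                    and $abs->host eq $root->host;
            }
        }
    );
    $extor->parse($html);
}

# Map a URL path to a file under mirror/; no handling of query strings,
# '..' segments, or a path that is both a page and a "directory".
sub save_local {
    my ( $uri, $content ) = @_;
    my $path = $uri->path;
    $path = '/'           if $path eq '';
    $path .= 'index.html' if $path =~ m{/$};
    my $file = 'mirror/' . $uri->host . $path;
    make_path( dirname($file) );
    open my $fh, '>', $file or die "Can't write $file: $!";
    binmode $fh;
    print {$fh} $content;
    close $fh;
}
```

Run as `perl mirror.pl http://example.com/` and it drops the pages under mirror/example.com/. What I'm hoping for is something that already handles the edge cases this doesn't.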
Thanks in advance.
Replies are listed 'Best First'.

Re: website archiver
by doom (Deacon) on Jan 13, 2009 at 04:31 UTC

Re: website archiver
by Arunbear (Prior) on Jan 13, 2009 at 07:26 UTC

Re: website archiver
by Anonymous Monk on Jan 13, 2009 at 07:24 UTC