kevin4truth has asked for the wisdom of the Perl Monks concerning the following question:
I have a Web site that uses PHP to serve pages for display. A typical URL may be something like: http://www.mysite.org/?page=some_html_page
I want to create a Perl script that copies the Web site content to a temp directory structure. The script would then go through all the URLs inside the many HTML files in that temp dir, change the above URL pattern to a relative link like "/pages/some_html_page.htm", save the file, and move on to the next file in the directory until all the files contain relative URL links.
The goal is to create a localized version (tar.zip) of the Web site so people can download it, unzip and browse locally on their computer without having a local Web server installed.
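A minimal sketch of the rewrite step might look like the following. It assumes the site has already been copied into a directory named `site_copy` (a hypothetical name), and that every `?page=NAME` link should become `/pages/NAME.htm` as described above; the regex and directory name would need adjusting to the real site layout:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

my $tempdir = 'site_copy';    # hypothetical copy of the site

find(sub {
    return unless /\.html?$/;              # only touch HTML files
    my $file = $_;

    open my $in, '<', $file or die "read $file: $!";
    my $html = do { local $/; <$in> };     # slurp the whole file
    close $in;

    # Rewrite http://www.mysite.org/?page=some_html_page (or the bare
    # "?page=" form) into a relative link /pages/some_html_page.htm
    my $changed =
        $html =~ s{(?:https?://www\.mysite\.org/)?\?page=([\w-]+)}
                  {/pages/$1.htm}g;

    if ($changed) {
        open my $out, '>', $file or die "write $file: $!";
        print {$out} $html;
        close $out;
    }
}, $tempdir);
```

This is only a starting point: a regex pass over HTML is fragile, and for anything beyond a simple, uniform link pattern a real parser such as HTML::TreeBuilder or Mojo::DOM would be safer.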
I used Perl many years ago, so there's lots of rust to break off with me.
Can someone please point me in the right direction as far as possible pre-existing code or something I can build on to get started?
Thanks much, Kevin
Replies are listed 'Best First'.
Re: Need direction on mass find/replacement in HTML files.
  by WizardOfUz (Friar) on Apr 20, 2010 at 17:03 UTC
    by kevin4truth (Initiate) on Apr 20, 2010 at 18:15 UTC
Re: Need direction on mass find/replacement in HTML files.
  by starX (Chaplain) on Apr 20, 2010 at 18:42 UTC
    by kevin4truth (Initiate) on Apr 29, 2010 at 15:36 UTC
    by wfsp (Abbot) on Apr 29, 2010 at 16:58 UTC
    by choroba (Cardinal) on Apr 29, 2010 at 16:11 UTC