in reply to Large scale search and replace with perl -i
Given that most HTML files are usually (hopefully) < 1 MB in size, it would make sense to use Aristotle's technique of changing $/, but set it to undef and slurp each file whole. The -0 switch sets $/ to NUL first, so the BEGIN block can read find's null-terminated list from STDIN before switching to slurp mode.
find . -name "*.html" -type f -print0 | \ perl -i -p0e \ 'BEGIN{ @ARGV = <STDIN>; chomp @ARGV; $/ = '' }; \ while (<>) { s/foo/bar/g; print }'
If find produces more file names than your command line can handle, couldn't you have find produce a list of directories instead, pass that into perl, and let perl glob the HTML files itself? Something like this (NB: completely untested code):
    find . -type d -print0 | \
    perl -i -p0e 'BEGIN {
        @ARGV = <STDIN>;
        chomp @ARGV;
        @ARGV = map { glob "$_/*.html" } @ARGV;
        $/ = undef;
    } s/foo/bar/g'
Combining that with Merlyn's trick of backing out the -i effect if nothing is found should save more time.
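I don't have Merlyn's exact code to hand, so here is a minimal sketch of that idea, assuming "backing out" just means never rewriting a file the substitution wouldn't change: pre-filter @ARGV in the BEGIN block so -i only ever opens files that actually match. The /foo/ test is a placeholder that must mirror the substitution pattern, and this is as untested as the code above:

    find . -name "*.html" -type f -print0 | \
    perl -i -p0e 'BEGIN {
        @ARGV = <STDIN>;
        chomp @ARGV;
        # Keep only files that actually contain the pattern, so -i
        # never rewrites a file the substitution would leave alone.
        @ARGV = grep {
            my $match = 0;
            if (open my $fh, "<", $_) {
                local $/;                  # slurp for the test read
                $match = <$fh> =~ /foo/;
            }
            $match;
        } @ARGV;
        $/ = undef;
    } s/foo/bar/g'

Matching files get read twice this way, but the extra read is cheap next to rewriting every file in place, and non-matching files are never touched at all.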