This is used by setting up a replacement hash whose keys are the tags to be replaced and whose values are the desired replacement text; the text containing the tags is typically slurped from a template file into an array.

    sub replace (\%\@) {
        my ($repl_ref, $text_ref) = @_;
        # build one alternation out of all the tags;
        # quotemeta guards against regex metacharacters in a tag
        my $repl_str = join '|', map quotemeta, keys %$repl_ref;
        for (@$text_ref) {
            s/($repl_str)/$$repl_ref{$1}/g;
        }
    }
Anyway, here's my question: Is this dog-slow? Is there a way to make it more efficient?
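One variation I've wondered about (a sketch only; the name replace_qr is mine, not anything standard): compile the alternation once with qr// instead of interpolating $repl_str into s/// on every line, and sort longer tags first so that a tag which is a prefix of another tag can't steal the match.

    # Hypothetical variant: pattern compiled once, longest tags first.
    sub replace_qr {
        my ($repl_ref, $text_ref) = @_;
        my $pat = join '|',
                  map quotemeta,
                  sort { length($b) <=> length($a) } keys %$repl_ref;
        my $re = qr/$pat/;    # compiled once, reused for every line
        s/($re)/$$repl_ref{$1}/g for @$text_ref;
    }

I don't know how much the qr// buys in practice, since Perl only recompiles an interpolated pattern when the string actually changes, but the longest-first sort at least makes overlapping tags behave predictably.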
I've recently run into a situation where I have text from a Mac source containing European characters that I need to work with on a Windows machine. I use the same code above, with the replacement hash having the single "wrong" characters as keys and the "right" characters (using mapping tables grabbed from unicode.org) as values. As one might imagine, this means some waiting while files are processed.
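Since every key in that case is a single byte, I've also considered replacing the long alternation with a single character class (a sketch; %mac2win, @lines, and the sample byte mapping are just illustrations, not my real table):

    # Hypothetical single-character mapping: one class match per byte
    # instead of an N-way alternation over all the keys.
    my %mac2win = ( "\x8E" => "\xE9" );   # e.g. MacRoman e-acute -> cp1252
    for (@lines) {
        s/([\x80-\xFF])/exists $mac2win{$1} ? $mac2win{$1} : $1/ge;
    }

For a truly fixed byte-to-byte table, I suspect a tr/// built once (via eval or a string-eval'd sub) would be faster still, but I haven't benchmarked either approach.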
Any commentary is most appreciated...