Fellow Monks,
Recently, I was involved in a discussion where I posted a Perl algorithm that I've used for quite a while for various things. I was under the impression that it was a reasonably well-designed way to tokenize a string, look up each token in a hash, and replace matched tokens as needed. The code I have used is this:
$string =~ s/([^\s.\]\[]+)/{exists($tokens_to_match{lc $1}) ? "$tokens_to_match{lc $1}" : "$1"}/gei;
My reasoning behind this code was as follows: since hash lookup takes roughly constant time, the approach should scale well, and doing everything in a single s///e pass avoids compiling a separate regexp for every token. However, it was countered that this solution was not efficient at all. The person replying claimed that the performance of this algorithm is worse than O(n^2) because the way I'm using s/// is inefficient.
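To make the approach concrete, here is a minimal, self-contained sketch of the substitution described above. The contents of %tokens_to_match and the sample string are invented for illustration; the sketch drops the redundant /i modifier, since lc already normalizes the case of each captured token before the lookup.

```perl
use strict;
use warnings;

# Hypothetical token table for illustration. Keys are lowercase because
# the lookup below normalizes each captured token with lc().
my %tokens_to_match = (
    'foo'  => 'bar',
    'perl' => 'Perl',
);

my $string = "foo loves PERL [really]";

# Single s///e pass: each run of characters that is not whitespace, '.',
# '[' or ']' is captured, lowercased, and looked up in the hash; tokens
# found in the hash are replaced, everything else is left unchanged.
$string =~ s/([^\s.\]\[]+)/exists $tokens_to_match{lc $1} ? $tokens_to_match{lc $1} : $1/ge;

print "$string\n";    # -> bar loves Perl [really]
```

Note that the brackets survive untouched because they are excluded from the character class, so only the word inside them is tested against the hash.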
I am hoping that some of you can provide guidance on this problem. Is there a better way to approach this problem than my current method?
In reply to Efficient string tokenization and substitution by jpfarmer