in reply to Efficient string tokenization and substitution
I can vaguely imagine where that claim came from, but I don't know enough about the implementation of s/// to make any statements.
You might want to benchmark against solutions which first tokenize the string, then look up translations for each token, and finally assemble a new string. The two approaches that suggest themselves are splitting the string and iterating over the resulting list, and walking across the string with a regex, collecting match offsets and lengths (see the sketch below).
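To make that concrete, here is a rough sketch of both approaches. The %translation hash and the \w+ token pattern are only placeholders for whatever your real data looks like:

```perl
use strict;
use warnings;

# Hypothetical lookup hash; substitute your real token => replacement map.
my %translation = ( foo => 'bar', baz => 'quux' );

# Approach 1: split into tokens (keeping the delimiters via a capturing
# split), translate each token, and join the pieces back together.
sub translate_by_split {
    my ($string) = @_;
    return join '',
        map { exists $translation{$_} ? $translation{$_} : $_ }
        split /(\W+)/, $string;
}

# Approach 2: walk the string with a global regex, using @- to get the
# offset of each match, and splice the translations into a new string.
sub translate_by_walk {
    my ($string) = @_;
    my $result = '';
    my $last   = 0;
    while ( $string =~ /(\w+)/g ) {
        my ( $token, $start ) = ( $1, $-[1] );
        next unless exists $translation{$token};
        $result .= substr( $string, $last, $start - $last ) . $translation{$token};
        $last = $start + length $token;
    }
    return $result . substr( $string, $last );
}
```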
Make sure you benchmark on greatly varied sets of data (long and short input strings, long and short tokens, many or few successful translations, lots or little data in the hash; there are a lot of combinations to consider).
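For the benchmark itself, the core Benchmark module's cmpthese is convenient. The hash and data sets below are only placeholders; vary them along the axes above:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Placeholder lookup hash and data sets.
my %translation = ( foo => 'bar', baz => 'quux' );
my %cases = (
    short_few_hits => 'one foo two',
    long_many_hits => join( ' ', ( 'foo', 'baz', 'other' ) x 5_000 ),
);

for my $name ( sort keys %cases ) {
    my $input = $cases{$name};
    print "--- $name ---\n";
    cmpthese( -2, {
        # Plain s///ge with a hash lookup in the replacement.
        subst => sub {
            ( my $copy = $input ) =~
                s{(\w+)}{ exists $translation{$1} ? $translation{$1} : $1 }ge;
        },
        # Tokenize with split, translate, and reassemble.
        split => sub {
            join '', map { exists $translation{$_} ? $translation{$_} : $_ }
                     split /(\W+)/, $input;
        },
    } );
}
```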
Makeshifts last the longest.