Thank you, Rolf, such a kind monk.
I have a few questions:
1. Some names contain regex metacharacters ('.', '$', and the like), for example: ke$ha, d.b.cooper, Tim Turner (I)...
2. There are a lot of names, more than 214132, so in my experience this approach is too slow.
3. Using only @matches = ( $input =~ /($regex)/g ), we cannot distinguish ambiguous names: Alex Fong / Fong, 周杰/周杰伦, 信/方中信...
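A sketch of how points 1 and 3 might be handled (the name list here is hypothetical): quotemeta escapes the metacharacters, and sorting the alternation longest-first makes the regex prefer "Alex Fong" over "Fong". For point 2, Perl 5.10+ trie-optimizes long alternations of plain literals internally, and Regexp::Assemble on CPAN can also compact the pattern.

```perl
use strict;
use warnings;

# Hypothetical name list, not the real 214132-entry dictionary.
my @names = ('ke$ha', 'd.b.cooper', 'Fong', 'Alex Fong');

# quotemeta neutralizes '.', '$', etc.; longest-first ordering
# makes the alternation try "Alex Fong" before "Fong".
my $regex = join '|',
            map  { quotemeta }
            sort { length($b) <=> length($a) } @names;

my $input   = 'I saw Alex Fong and ke$ha yesterday.';
my @matches = ( $input =~ /($regex)/g );
print "$_\n" for @matches;   # prints "Alex Fong" then "ke$ha"
```

Without the sort, "Fong" would appear before "Alex Fong" in the alternation and the shorter name would win.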
PS:
For a pure Chinese string, I use Lingua::ZH::WordSegment with a custom dictionary, and it works fine.
But for other languages, or text mixed with Chinese, I cannot find a way to ensure that the tokens/chunks in the dictionary are NOT split.
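One possible approach for protecting dictionary entries in mixed text (a sketch with a hypothetical dictionary, not using Lingua::ZH::WordSegment itself): split on a capturing pattern, so dictionary hits come back as whole tokens, and only the remainder chunks are handed to a segmenter.

```perl
use strict;
use warnings;
use utf8;
binmode STDOUT, ':encoding(UTF-8)';

# Hypothetical dictionary; longest-first so 方中信 beats 信.
my @dict  = ('方中信', 'Alex Fong', '周杰伦');
my $regex = join '|',
            map  { quotemeta }
            sort { length($b) <=> length($a) } @dict;

my $input = '昨天Alex Fong和周杰伦去看方中信';

# split with capturing parens keeps the separators (the
# dictionary hits) in the result list; drop empty chunks.
my @chunks = grep { length } split /($regex)/, $input;

for my $c (@chunks) {
    if ($c =~ /^(?:$regex)\z/) {
        print "DICT: $c\n";   # protected token, never split
    } else {
        print "REST: $c\n";   # pass this to your segmenter
    }
}
```

The idea is that the dictionary regex acts as a first pass, and word segmentation only ever sees text that contains no dictionary entry.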
In reply to Re^2: How to tokenize string by custom dictionary? (+code)
by infantcoder
in thread How to tokenize string by custom dictionary?
by infantcoder