Combining tokens gives context. It allows you to differentiate, to an extent, between 'are you free tonight', 'debt free', 'free widgets', etc.
If you just run on 'free' as a single word you lose sensitivity, as it is quite common. Using consecutive tokens is the way to go IMHO, and it is the method employed in voice recognition (Dragon used to use 2 words for context and IBM 3, I believe).
The price is size and speed. To give you an idea, our single word token file is ~100K, the two word token file is ~10MB, and the three word token file is > 1GB. You will note a rough 2 order of magnitude increase in size with each word you add to the phrases.
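To make the idea concrete, here is a minimal sketch (in Python rather than Perl, and not the actual token-file code) of generating the consecutive-word tokens described above; `word_ngrams` is a hypothetical helper name:

```python
def word_ngrams(text, n):
    """Return every run of n consecutive words from text."""
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

msg = "are you free tonight"
print(word_ngrams(msg, 1))  # ['are', 'you', 'free', 'tonight']
print(word_ngrams(msg, 2))  # ['are you', 'you free', 'free tonight']
print(word_ngrams(msg, 3))  # ['are you free', 'you free tonight']
```

The two-word tokens keep 'free tonight' and 'debt free' distinct where a bare 'free' would not, at the cost of a vocabulary that grows roughly with the square (or cube, for three-word tokens) of the distinct words seen.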
cheers
tachyon
s&&rsenoyhcatreve&&&s&n.+t&"$'$`$\"$\&"&ee&&y&srve&&d&&print
In reply to Re: Re: Re: Spam filtering regexp - keyword countermeasure countermeasure by tachyon
in thread Spam filtering regexp - keyword countermeasure countermeasure by John M. Dlugosz