in reply to Re: Re: Spam filtering regexp - keyword countermeasure countermeasure
in thread Spam filtering regexp - keyword countermeasure countermeasure
Combining tokens gives context. It allows you to differentiate, to an extent, between 'are you free tonight' and 'debt free', 'free widgets', etc.
If you just key on 'free' as a single word you lose sensitivity, as this word is quite common in legitimate mail. Using consecutive tokens is the way to go IMHO, and it is the method employed in voice recognition (Dragon used to use 2 words of context and IBM 3, I believe).
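A minimal sketch of what "consecutive tokens" means here, in Python for illustration (the function name and whitespace tokenisation are my assumptions, not tachyon's actual code):

```python
def ngrams(text, n):
    """Return the list of n consecutive-word tokens in text.

    Illustrative sketch: lowercases and splits on whitespace only;
    a real tokeniser would handle punctuation, markup, etc.
    """
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

# Both messages contain the single token 'free', but their two-word
# tokens differ, which is what lets the filter tell them apart:
print(ngrams("are you free tonight", 2))
# ['are you', 'you free', 'free tonight']
print(ngrams("debt free widgets", 2))
# ['debt free', 'free widgets']
```

Scoring on 'free tonight' versus 'debt free' recovers the context that the bare word 'free' throws away.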
The price you pay is in size and speed. To give you an idea, our single-word token file is ~100KB, the two-word token file is ~10MB, and the three-word token file is >1GB. You will note a roughly two-order-of-magnitude increase in size each time you add a word to the phrases.
cheers
tachyon
s&&rsenoyhcatreve&&&s&n.+t&"$'$`$\"$\&"&ee&&y&srve&&d&&print
Re: Re: Re: Re: Spam filtering regexp - keyword countermeasure countermeasure
by John M. Dlugosz (Monsignor) on May 15, 2003 at 19:45 UTC