in reply to Spam filtering regexp - keyword countermeasure countermeasure
This discussion of Bayesian spam filtering should be of use: http://www.paulgraham.com/spam.html
We run a Bayesian web filter that uses phrases. A phrase is a 1-, 2- or 3-word token. The tokens are contiguous character runs (generally words), but also URL domain links and some other bits and bobs. We use phrases rather than single words as this adds context.
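The 1-/2-/3-word phrase tokens described above can be sketched as follows. This is a minimal illustration in Python rather than whatever the filter actually runs, and the regex defining a "contiguous character set" is an assumption; the real tokenizer also handles URL domains and other special cases not shown here.

```python
import re

def phrase_tokens(text, max_words=3):
    """Split text into contiguous word tokens, then emit every
    1-, 2- and 3-word phrase (n-gram) as a single token, so that
    word context is captured in the token itself."""
    # Assumed definition of a "word"; the real filter's rules differ.
    words = re.findall(r"[A-Za-z0-9'.-]+", text.lower())
    phrases = []
    for n in range(1, max_words + 1):
        for i in range(len(words) - n + 1):
            phrases.append(" ".join(words[i:i + n]))
    return phrases
```

For example, a three-word message yields six tokens: three single words, two 2-word phrases, and one 3-word phrase.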
Everything is automated; we don't add anything by hand. The system is designed to work hands-free. The data sets used to generate these were large (i.e. 50,000), and the DB (with all single instances removed) covers about half a million such token phrases. As you get more data you simply re-run the phrase/probability generator, which will then take account of new phrases that commonly appear in your target content.
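A phrase/probability generator of the kind described above can be sketched like this. It follows Graham's per-token probability estimate from the linked article; the exact formula and clamping values used by the filter here are assumptions, but the single-instance pruning matches the post.

```python
def phrase_probabilities(bad_counts, good_counts, n_bad, n_good, min_count=2):
    """For each phrase seen in the two corpora, estimate the probability
    that a message containing it is bad (Graham-style).
    bad_counts/good_counts map phrase -> number of messages containing it;
    n_bad/n_good are the corpus sizes.  Phrases seen fewer than
    min_count times in total are dropped, mirroring the removal of
    single instances from the DB."""
    probs = {}
    for phrase in set(bad_counts) | set(good_counts):
        b = bad_counts.get(phrase, 0)
        g = good_counts.get(phrase, 0)
        if b + g < min_count:
            continue  # prune single instances
        bad_freq = min(1.0, b / n_bad)
        good_freq = min(1.0, g / n_good)
        p = bad_freq / (bad_freq + good_freq)
        # Clamp so no phrase is treated as absolute proof either way.
        probs[phrase] = min(0.99, max(0.01, p))
    return probs
```

Re-running this over a grown corpus is all the retraining there is, which is what keeps the system hands-free.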
We use Math::BigFloat because if you use too many phrases your probabilities underflow the limits of floating-point precision. If you don't use Math::BigFloat, the optimal range of tokens to run the Bayesian calculation on is (in our testing) 8-20, depending on the data set. We pick the phrases that offer the greatest differential (i.e. the good-bad probability difference). Accuracy can be very impressive: we get a sensitivity of > 99.7% and a specificity of 99.9% for porn, for example.
The probabilities tend to move rapidly towards either 0 or 1. A probability threshold of > 0.9 works well in practice, but even > 0.5 is still remarkably accurate.
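The combination step, picking the phrases with the greatest differential and multiplying their probabilities together, can be sketched as below. The combining formula is Graham's from the linked article (an assumption about what this filter does); working in log space is one alternative to Math::BigFloat for avoiding the underflow mentioned above, and the output shows why scores race towards 0 or 1.

```python
import math

def score(probs, top_n=15):
    """Combine the top_n most 'interesting' phrase probabilities
    (largest differential |p - 0.5|) with the naive-Bayes formula
      p1*...*pn / (p1*...*pn + (1-p1)*...*(1-pn)).
    Summing logs instead of multiplying raw floats avoids the
    underflow that Math::BigFloat addresses in the Perl version."""
    picked = sorted(probs, key=lambda p: abs(p - 0.5), reverse=True)[:top_n]
    log_bad = sum(math.log(p) for p in picked)
    log_good = sum(math.log(1.0 - p) for p in picked)
    return 1.0 / (1.0 + math.exp(log_good - log_bad))
```

Even a handful of mildly suggestive tokens (say, ten phrases at p = 0.9 each) drives the combined score to within a rounding error of 1, which is why thresholds of 0.9 and 0.5 behave almost identically in practice.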
cheers
tachyon
s&&rsenoyhcatreve&&&s&n.+t&"$'$`$\"$\&"&ee&&y&srve&&d&&print
Replies are listed 'Best First'.

- Re: Re: Spam filtering regexp - keyword countermeasure countermeasure by John M. Dlugosz (Monsignor) on May 13, 2003 at 21:55 UTC
  - by tachyon (Chancellor) on May 15, 2003 at 01:33 UTC
    - by John M. Dlugosz (Monsignor) on May 15, 2003 at 19:45 UTC