in reply to Spam filtering regexp - keyword countermeasure countermeasure

This discussion of Bayesian spam filtering should be of use: http://www.paulgraham.com/spam.html

We run a Bayesian web filter that uses phrases. A phrase is a 1-, 2- or 3-word token. The tokens are contiguous character runs (generally words), but also URL domains and some other bits and bobs. We use phrases rather than single words as this adds context.
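
A minimal sketch of how such 1-, 2- and 3-word phrase tokens could be pulled out of a chunk of text (the sub name and the split pattern are illustrative only, not our production code):

    use strict;
    use warnings;

    # Extract every 1-, 2- and 3-word phrase token from a chunk of text.
    sub phrase_tokens {
        my ($text) = @_;
        # contiguous word-character runs; dots kept so URL domains stay whole
        my @words = grep { length } split /[^\w.]+/, lc $text;
        my @tokens;
        for my $n ( 1 .. 3 ) {
            for my $i ( 0 .. $#words - $n + 1 ) {
                push @tokens, join ' ', @words[ $i .. $i + $n - 1 ];
            }
        }
        return @tokens;
    }

    # prints each 1-word, then 2-word, then 3-word phrase, one per line
    print "$_\n" for phrase_tokens("Get out of debt free at example.com");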

Everything is automated; we don't add anything by hand. The system is designed to work hands-free. The data sets used to generate these were large (i.e. 50,000), and the DB (with all single instances removed) covers about half a million such token phrases. As you get more data you simply re-run the phrase/probability generator, which then takes account of new phrases that commonly appear in your target content.
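
The phrase/probability generator is essentially the counting pass from Graham's article, run over phrases instead of single words. A rough sketch of that step (the hash names, the doubling of good counts and the 0.01/0.99 clamps follow Graham; they are assumptions about the approach, not a dump of our code):

    use strict;
    use warnings;

    # Turn raw phrase counts into per-phrase spam probabilities.
    # $good/$bad are hashrefs of phrase => count; $ngood/$nbad are corpus sizes.
    sub phrase_probabilities {
        my ( $good, $bad, $ngood, $nbad ) = @_;
        my ( %prob, %seen );
        $seen{$_} = 1 for keys %$good, keys %$bad;
        for my $phrase ( keys %seen ) {
            my $g = $good->{$phrase} || 0;
            my $b = $bad->{$phrase}  || 0;
            next if $g + $b < 2;    # all single instances removed
            my $p = ( $b / $nbad ) / ( 2 * $g / $ngood + $b / $nbad );
            $p = 0.99 if $p > 0.99; # keep probabilities away from 0 and 1
            $p = 0.01 if $p < 0.01;
            $prob{$phrase} = $p;
        }
        return \%prob;
    }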

We use Math::BigFloat because if you use too many phrases your probabilities run off the ends of the floating-point precision. The optimal number of tokens to run the Bayesian calculation over is (in our testing) 8-20, depending on the data set, if you don't use Math::BigFloat. We pick the phrases that offer the greatest differential (i.e. good-bad probability difference). Accuracy can be very impressive: we run a sensitivity of > 99.7% and a specificity of 99.9% for porn, for example.
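
In outline the Math::BigFloat step looks something like this (the sub name and the pick-by-distance-from-0.5 selection are my shorthand for "greatest differential", not the production code):

    use strict;
    use warnings;
    use Math::BigFloat;

    # Combine the $keep most differential phrase probabilities into one score.
    sub combined_probability {
        my ( $probs, $keep ) = @_;    # arrayref of per-phrase probabilities
        $keep = @$probs if $keep > @$probs;
        my @best = ( sort { abs( $b - 0.5 ) <=> abs( $a - 0.5 ) } @$probs )[ 0 .. $keep - 1 ];

        my $p    = Math::BigFloat->new(1);
        my $notp = Math::BigFloat->new(1);
        for my $x (@best) {
            $p->bmul($x);            # running product of p
            $notp->bmul( 1 - $x );   # running product of (1 - p)
        }
        # Graham's combination: prod(p) / ( prod(p) + prod(1-p) )
        return $p->copy->bdiv( $p->copy->badd($notp) );
    }

With ordinary floats those running products underflow to zero once you multiply enough of them together, which is why 8-20 tokens is about the practical limit without Math::BigFloat.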

The probabilities tend to move rapidly towards either 0 or 1. A probability threshold of > 0.9 works well in practice, but even > 0.5 is still remarkably accurate.

cheers

tachyon

s&&rsenoyhcatreve&&&s&n.+t&"$'$`$\"$\&"&ee&&y&srve&&d&&print


Re: Re: Spam filtering regexp - keyword countermeasure countermeasure
by John M. Dlugosz (Monsignor) on May 13, 2003 at 21:55 UTC
    Hmm, one reply says to use individual chars, and you use groups of words. I can see how that would work, in that D,E,B and E,B,T are both 3-token groups that will be found.

    So I get the feeling that using Bayesian analysis on single whole words (e.g. POPFile) is the worst way to do it!

    My idea is to add more "context" than POPFile can glean by itself, by adding special keywords when the preliminary filter spots things.

      Combining tokens gives context. It allows you to differentiate, to an extent, between 'are you free tonight', 'debt free', 'free widgets', etc.

      If you just run on 'free' as a single word you lose sensitivity, as it is quite common. Using consecutive tokens is the way to go IMHO, and is the method employed in voice recognition (Dragon used to use 2 words for context and IBM 3, I believe).
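
      Made-up numbers, but a toy phrase table shows the point: the phrase carries the signal that the bare word doesn't.

          my %prob = (
              'free'         => 0.55,   # common everywhere, nearly useless alone
              'debt free'    => 0.98,   # strongly spammy in this toy example
              'free tonight' => 0.20,   # reads like ordinary mail
          );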

      The price is paid in size and speed. To give you an idea, our single-word token file is ~ 100K, the two-word token file is ~ 10MB, and the three-word token file is > 1GB. You will note a rough two orders of magnitude increase in size as you add a word to the phrase length.

      cheers

      tachyon

      s&&rsenoyhcatreve&&&s&n.+t&"$'$`$\"$\&"&ee&&y&srve&&d&&print

        I suppose that size and speed are an issue for a lot of folks looking for a client-side filter. My machine has cycles to burn while I'm not home and spam keeps trickling in, so it will all be finished analysing by the time I get around to checking the in-box. So for someone who auto-checks the POP all day, a more server-like solution is feasible. 1GB of disk space is nothing... but weighing down my machine with a background task while I'm doing something interactively is a bigger deal.