
In fact, it is better not to put them in the "correct" bucket: as Paul Graham noted, where a spammer tries to subvert rule-based filters with "vi.agra" instead of "viagra", the former gets marked as a near-100% indicator of spam, whereas the latter might have been innocent.

The problem with this is that there are too many ways to mangle a word such as "viagra"; I've seen fifty or so variations already.

Just a quick regex-based scrape:
VIagra V.i.a.gra Vi@gra V i a g r a V1AGR@ V1agra VI.A.G.R.A VIAGR@ Viagra Viagra viagr@ VlAGR@ V.l.A.G.R.A V_iagra vi.a.g.r.a Vi-agra V I A G R A V1AGRA VViagra V.iagra Viagr a V.I.A.G.R.A Via-gra Vviagra Viagara VlAGRA Vi@gr@ V-i-@-g-r-a V.IAGRA V1@GRA Viagraa Via.gra Viagrra viagra VIAGRA Viagr@ Viagra V%iagra V|agr@ V,I,A,G,R,A V.I,A.G,R.A V iagra Viagr*a Vi^agra V'1'a'g'r'a Viagraaaaa Via.graa V-i-a-g-r-a Vi.agra v-i-a-g''r''a V'l'a'g'r'a Viagr.a vit&agra
(And this missed all the ones that spelled the v as "\/" or the a as "/\", all the ones that used entities to obscure a letter other than i (which gets picked up because the entity happens to contain a 1), and probably some others.)
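
A pattern along these lines catches most of the list above. This is a rough sketch, not the exact regex behind the scrape, and the character classes are only a small subset of the substitutions out there:

#!/usr/bin/perl
use strict;
use warnings;

# Junk the spammers wedge between letters: punctuation and whitespace.
my $junk = qr/[-.,_'^*%|\@\s]*/;

# One alternation per letter, covering substitutions from the list above.
my $V = qr{[vV]|\\/};       # v, V, \/
my $I = qr{[iIl1|!]};       # i, l, 1, |, !
my $A = qr{[aA\@4]|/\\};    # a, @, 4, /\
my $G = qr{[gG9]};
my $R = qr{[rR]};

my $viagra = qr/$V $junk $I $junk $A $junk $G $junk $R $junk $A/x;

while (my $line = <>) {
    print "MATCH: $line" if $line =~ $viagra;
}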

This is basic arithmetic: if there are four ways to do v, four ways to do a, eight ways to do i, seven places to add extra character(s), and a large number of different combinations of extra characters that can be added (any combination of punctuation, for example; I've also seen "creme" on the end, and I'm sure there are other possibilities), that makes 4*4*8*7*n different ways to spell the word, where n is a large number. Repeat for other popular drugs (vicodin gets spelled even more creatively, for example). Add to this the threshold on how many times a word has to occur to be interesting, and the order-prescription-drugs spammers alone will be sending you several *million* messages before your naive bayesian filters become effective.
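
To put rough numbers on it -- every count below is invented, purely to illustrate the scale:

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical counts, just to show the scale of the problem.
my $v_ways      = 4;     # v, V, \/, ...
my $a_ways      = 4;     # a, @, 4, /\, ...
my $i_ways      = 8;     # i, I, l, 1, |, !, ...
my $gap_slots   = 7;     # places where extra characters can be inserted
my $junk_combos = 100;   # a very conservative stand-in for "n"

my $spellings = $v_ways * $a_ways * $i_ways * $gap_slots * $junk_combos;
print "at least $spellings distinct spellings\n";    # 89600

# If a token has to be seen, say, 5 times before the filter trusts it:
my $threshold = 5;
printf "on the order of %d messages before the filter catches up\n",
    $spellings * $threshold;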

This is only true for the serious hardcore mutating spam, the stuff that's always sent from Asia so as to be utterly untraceable, the stuff that gets a whole new subnet every month or so, the stuff that mutates every single aspect of the headers with just about every single message. However, since that stuff is most of the spam I get...

The only thing that's consistent about this stuff is that the IP address from which it's sent never EVER has a PTR record in in-addr.arpa space. If I ran my own mail server, the first thing I would want to implement is a ticket-verification scheme for messages sent from hosts without proper reverse DNS. 99% of the legit mail comes from a host with a proper PTR record, and that mail would be undelayed. The rest would go through one of those one-time verification systems wherein each sender would have to respond once to a verification probe and then would be whitelisted. (Of course, if everyone did this the scumbags would probably arrange to be a domain registrar so that it would cost them little or nothing to burn a domain for each batch of spam...)
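
The PTR half of that is cheap to check. Here is a minimal sketch using Net::DNS; the ticket-verification machinery itself is hand-waved away:

#!/usr/bin/perl
use strict;
use warnings;
use Net::DNS;

# True if the connecting IP has a PTR record.  Net::DNS converts a bare
# IP address into its in-addr.arpa form for us when we ask for PTR.
sub has_reverse_dns {
    my ($ip) = @_;
    my $res    = Net::DNS::Resolver->new;
    my $answer = $res->query($ip, 'PTR');
    return 0 unless $answer;
    return grep { $_->type eq 'PTR' } $answer->answer;
}

# Policy sketch: deliver immediately if reverse DNS exists; otherwise
# shunt the message into the (hypothetical) verification queue.
my $ip = shift @ARGV or die "usage: $0 <ip-address>\n";
if (has_reverse_dns($ip)) {
    print "$ip: PTR record found, deliver normally\n";
} else {
    print "$ip: no PTR record, hold for sender verification\n";
}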

See, this is the problem with Paul Graham's approach: the spammers are busy thinking about circumvention, an issue he ignores completely. If we want to stop spammers from getting through our filters, we're going to have to be more thorough, predicting and preventing even the simple attacks. Naive bayesian filtering eats flaming death when the spammers switch from plain language to euphemism and throw in some Markov chains (thirty-year-old technology). I predicted this within five minutes of reading Paul Graham's original article on the topic. Sure enough, when I tried out ifile (seeded with thousands of messages in each category), it was maybe 75% effective, making errors in both directions -- useless. It was admittedly very good at filtering out the simplistic spam, especially things like 419 spam, but it failed miserably on the hard stuff.

A single technique is not going to solve the matter. The spammers combine techniques -- lots of techniques -- and we need to combine techniques as well. We need to apply regex technology, so that "moster rod" and "M0n-stur R0>" are treated as the same phrase, or at least as very similar, and then we need to look not just at individual words but at phrases, at combinations of certain words in close proximity to one another, and so forth, so that "M0n-stur R0>" scores as a close match to "Turn your rod into a monster." (Yeah, more CPU time. So be it. CPU time is cheaper than my time and cheaper than my bandwidth, too.)

In short, our filters need to be less naive; they need to combine various techniques. Can bayesian analysis help? Sure. Can it do the job by itself? No. Can regular expressions do the job? No. But they can help...
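
As a first step in that direction, a canonicalization pass could fold obfuscated characters back into plain letters before the statistical stage ever sees a token. A rough sketch, with a deliberately tiny set of equivalence classes:

#!/usr/bin/perl
use strict;
use warnings;

# Map common substitutions back to the plain letter, strip inter-letter
# junk, and hand the canonical token to the statistical filter.  Real
# equivalence classes would be larger and built from data.
my %canon = (
    '@'   => 'a', '4'  => 'a', '/\\' => 'a',
    '0'   => 'o', '()' => 'o',
    '1'   => 'i', '|'  => 'i', '!'   => 'i',
    '5'   => 's', '$'  => 's',
    '3'   => 'e',
    '\\/' => 'v',
);

sub canonicalize {
    my ($token) = @_;
    # longest substitutions first, so "/\" wins over single characters
    for my $from (sort { length $b <=> length $a } keys %canon) {
        $token =~ s/\Q$from\E/$canon{$from}/g;
    }
    $token =~ s/[-._,'^*%]+//g;    # drop inter-letter junk
    return lc $token;
}

print canonicalize('V1AGR@'),   "\n";   # "viagra"
print canonicalize('M0n-stur'), "\n";   # "monstur" -- one edit from "monster"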


$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}} split//,".rekcah lreP rehtona tsuJ";$\=$ ;->();print$/

Re^2: (OT) Fighting spam (naive, but not *that* naive)
by Aristotle (Chancellor) on Nov 17, 2003 at 19:36 UTC

    You may think about Paul Graham whatever you want, but he's not that stupid. I am continually surprised that people don't seem to get how and why Bayesian filtering works so effectively for old-fashioned (more on that in a bit) spam.

    Let me ask once more: how likely do you deem "M0n-stur" to be in legitimate mail? How likely is it in spam? And what is the ratio of these probabilities? Now, how likely is "Monster" to be in legitimate mail? How likely is it in spam? And what is the ratio of these probabilities?

    Result: "M0n-stur" only appears in mails that are spam. "Monster" appears in mail that is probably around 30-80% spam, depending on your specific mail traffic. This means you do not want to map the variation back to "monster". The presence of a variation is almost a dead give-away of spam.

    This is why naive Bayesian filtering works as well as it does for spam so far, despite being naive.
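
    To make that concrete, here is roughly how the per-token probability falls out of the counts. The formula follows Paul Graham's "A Plan for Spam"; the counts themselves are invented for illustration:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Per-token spam probability, roughly as in "A Plan for Spam".
    sub token_probability {
        my ($bad, $good, $nbad, $ngood) = @_;
        my $b = $bad / $nbad;
        my $g = 2 * $good / $ngood;    # Graham doubles the good count
        my $p = ($b + $g) ? $b / ($b + $g) : 0.5;
        $p = 0.99 if $p > 0.99;
        $p = 0.01 if $p < 0.01;
        return $p;
    }

    # "M0n-stur": 30 spams, 0 legitimate mails -> pegged at 0.99
    printf "M0n-stur: %.2f\n", token_probability(30, 0, 1000, 1000);

    # "Monster": 400 spams, 350 legitimate mails -> ambiguous, ~0.36
    printf "Monster:  %.2f\n", token_probability(400, 350, 1000, 1000);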

    This extreme effectiveness of Bayesian filters against obfuscated variations of keywords has prompted spammers to move on beyond variations. They are now circumphrasing, not mentioning viagra, monster rods, or whatever it is they're advertising at all.

    I am now occasionally getting mail along the lines of

    Subject: I never thought I'd see better days

    I was really in a bind until I found this, and now I can even afford to live carelessly. Believe me, it works.

    There is absolutely nothing in there that any kind of content based filter could pick out, unless it were to actually understand the message.

    This is why content based filtering is a dead end. Most of the things you describe will only fool rule based filters; statistical filters, a family of which Bayes is just one member, will pick them up reliably. But they cannot comprehend the message; hence spam such as what I outlined above, and which tachyon and Andy Lester observed as well.

    Makeshifts last the longest.

      Result: "M0n-stur" only appears in mails that are spam. "Monster" appears in mail that is probably around 30-80% spam, depending on your specific mail traffic. This means you do not want to map the variation back to "monster". The presence of a variation is almost a dead give-away of spam.
      Certainly, you don't want to treat "M0n-stur" as equivalent to "Monster", because it's an obfuscated variation. The point of the original poster is that naive Bayesian filtering cannot keep up with all the possible variations of "|V|()|\|STER", whereas a good regex can "see" that they're equivalent.

      This is why naive Bayesian filtering works as well as it does for spam so far, despite being naive.
      Only if all spammers use the exact same obfuscated text variants of spam keywords. Since there are literally millions of variants, and spammers are now actively trying to defeat Bayesian filtering, that seems unlikely.

      A good regex can, first of all, tell you "this is obfuscated text" which is enough to flag the mail as spam without figuring out what the text is supposed to be. Going further and finding the word which is being obfuscated could help more, and let a string like "|_33t |-|ax0R" pass as a probable joke, while having no tolerance for "\/|AG.RA" at all.
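
      Even a crude version of that test is easy to sketch: "letters interleaved with digits or punctuation inside a single token". The thresholds and character classes below are guesses:

      #!/usr/bin/perl
      use strict;
      use warnings;

      # Crude "is this token obfuscated?" test.
      sub looks_obfuscated {
          my ($token) = @_;
          return 0 if $token =~ /^[a-zA-Z]+$/;       # plain word: fine
          return 0 if $token =~ /^[0-9.,:\/-]+$/;    # number, date, etc.
          # letters interleaved with digits/symbols is the giveaway
          return $token =~ /[a-zA-Z][^a-zA-Z\s]+[a-zA-Z]/ ? 1 : 0;
      }

      # note: |_33t slips through -- it contains only one real letter
      for my $t (qw( monster M0n-stur \/|AG.RA hello 3.14 |_33t )) {
          printf "%-12s %s\n", $t, looks_obfuscated($t) ? "obfuscated" : "ok";
      }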

      Your last point about stealth spam seems to be on target. However, it points out how we need many tools in our spam-fighting arsenal. One type of content filtering won't do it; and content filtering alone won't do it either. Hopefully we'll have some new weapons soon.

      Let me ask once more: how likely do you deem "M0n-stur" to be in legitimate mail?

      Zero, of course. So what? A naive bayesian filter doesn't *know* that until it has already seen the word in some minimum number of spam messages -- by which time the spammer has moved on to spelling it some other way.

      The presence of a variation is almost a dead give-away of spam.

      Yes, but the computer isn't smart enough (*certainly* naive bayesian filters aren't smart enough) to know which spelling is correct. We can introduce larger and larger dictionaries, but one interesting thing about English is that no matter how ginormous you make the dictionary, there will be many perfectly cromulent words it doesn't contain. And in any case, how would you program your filter to distinguish innocent misspellings (if, for example, I had written "cromelent" above) from deliberately evasive ones ("monstur", "monstir", "mawnsteur", ...)?

      Can the filters be made smart enough to know that "/\/\0N-STAR" is probably a deliberate misspelling? Yes, probably, assuming you don't get much legitimate mail that uses 1337 5P33|< -- and that was my point, or a large part of it. It's not good enough to treat "M()n5terr" as a brand-new word that's never been seen before, filling my inbox with each new variation. Ideally the filter ought to figure out that it's a mangled form of the word "monster", but failing that, it at least needs to be treated as a member of a class of words that match a known pattern.

      The latter is easier than the former, because all it requires is character classes; actually figuring out which word is the unmangled original requires a very large and continuously updated dictionary, among other things. That would be a good thing to work toward, certainly, but the patterns based on character classes can be built automatically from an existing corpus of mail (given only the character equivalence classes, and allowing for some "characters" to be more than one character long, e.g., "/\/\"), whereas the dictionary would have to be hand-built almost entirely.
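
      Something along these lines -- a sketch with a deliberately small set of classes; note that folding vowel swaps like u-for-e into the e class is my own guess at how to handle "M0n-stur":

      #!/usr/bin/perl
      use strict;
      use warnings;

      # The equivalence classes are the only hand-made input; the
      # pattern for any corpus word is generated from them.  Some
      # "characters" are more than one character long, e.g. /\/\ for m.
      my %class = (
          a => [ 'a', '@', '4', '/\\' ],
          e => [ 'e', '3', 'a', 'u' ],     # includes vowel swaps
          i => [ 'i', 'l', '1', '|', '!' ],
          m => [ 'm', '/\\/\\' ],
          o => [ 'o', '0', '()' ],
          s => [ 's', '5', '$' ],
          v => [ 'v', '\\/' ],
      );

      sub pattern_for {
          my ($word) = @_;
          my $junk = q{[-._,'^*% ]*};      # optional junk between letters
          my @parts;
          for my $ch (split //, lc $word) {
              my @alts = map { quotemeta } @{ $class{$ch} || [$ch] };
              push @parts, '(?:' . join('|', @alts) . ')';
          }
          my $pat = join $junk, @parts;
          return qr/$pat/i;
      }

      my $re = pattern_for('monster');
      for my $t ('monster', 'M0n-stur', '/\\/\\0N-STAR', 'minster') {
          printf "%-14s %s\n", $t, $t =~ $re ? "matches" : "no match";
      }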

      Furthermore, it's not good enough to assign some probability to "a variation of 'monster'" and be done. I want "monster", or any variation of monster, to have a much higher spam probability when it occurs in close proximity to "rod", or any variation thereof. This gets much more CPU-intensive, of course, but as noted, CPU time is cheaper than bandwidth and much cheaper than hiring a human to filter my mail.
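
      A sketch of that kind of proximity scoring, with invented weights and window size:

      #!/usr/bin/perl
      use strict;
      use warnings;

      # Score co-occurring word pairs within a small window, so that
      # "monster" near "rod" counts for more than either word alone.
      my %pair_weight = (
          'monster rod' => 5.0,
          'rod monster' => 5.0,
      );
      my $window = 4;    # tokens apart that still count as "close"

      sub proximity_score {
          my (@tokens) = @_;
          my $score = 0;
          for my $i (0 .. $#tokens) {
              my $max = $i + $window;
              $max = $#tokens if $max > $#tokens;
              for my $j ($i + 1 .. $max) {
                  $score += $pair_weight{"$tokens[$i] $tokens[$j]"} || 0;
              }
          }
          return $score;
      }

      # tokens as they might come out of a canonicalizer:
      my @msg = qw( turn your rod into a monster );
      printf "score: %.1f\n", proximity_score(@msg);    # rod..monster -> 5.0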


      $;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}} split//,".rekcah lreP rehtona tsuJ";$\=$ ;->();print$/