lokiloki has asked for the wisdom of the Perl Monks concerning the following question:
http://search.cpan.org/~kwilliams/AI-Categorizer/
I don't understand something. I need your help.
This module has a Naive Bayes implementation. I tested this by training on about 1000 "ham" messages and 200 "spam" messages.
I then examined a number of test documents. I wanted to look at each WORD in those documents to find the words that are most "spammy" and those that are most "hammy". So I iterated over each document, treating every word as its own one-word document, and printed the best_category and score for each word.
Strangely, many very spam-like words were assigned to the "ham" category, and with high ham scores.
So, I then altered things and trained on equal numbers of spam and ham documents: 200 for each. When I retested, the spam-like words were now more likely to be labeled as spam.
I guess I don't understand...
Do these Naive Bayes filters require you to train on a corpus where the categories contain exactly equal numbers of documents? Or must I train on a corpus where the spam/ham ratio matches the true rate of spam to ham that I receive in my mailbox?
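If I understand the math right, Naive Bayes picks the category c that maximizes log P(c) + sum over words w of log P(w|c), so the class prior P(c) is baked into every score. With 1000 ham and 200 spam training documents, the prior alone gives ham a head start of log(5) ≈ 1.6, which can swamp the evidence from a single word. A toy calculation (the probabilities here are made up, purely for illustration):

```perl
use strict;
use warnings;

# Made-up numbers, just to show the effect of the class prior.
# A 1000-ham / 200-spam corpus gives P(ham) = 5/6, P(spam) = 1/6.
my %prior  = ( ham => 1000 / 1200, spam => 200 / 1200 );

# Suppose the word "prize" is four times more likely in spam than in ham:
my %p_word = ( ham => 0.001, spam => 0.004 );

for my $cat (qw(ham spam)) {
    printf "%-4s score: %.3f\n", $cat,
        log( $prior{$cat} ) + log( $p_word{$cat} );
}
# Output:
#   ham  score: -7.090
#   spam score: -7.313
# The 4x likelihood edge for spam (log 4 ~ 1.39) loses to the 5x prior
# edge for ham (log 5 ~ 1.61), so this single word gets labeled "ham".
```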
I am thinking about a text classification problem beyond spam/ham where I have only fragmentary documents for one category and many more documents for the other. But in a test scenario, the examined documents will be about 50/50... So I am unsure how to train a Bayes filter given the unequal categories in the corpus. I am not a math person, so I have trouble understanding the underpinnings of all of this.
I want the Bayesian filter to assume that there is a 50/50 chance that a document is A or B. But for training, I have only a handful of A documents and many B documents.
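To make concrete what I mean by "assume a 50/50 chance": something like this from-scratch multinomial sketch (this is NOT AI::Categorizer's API, and the training data is hypothetical) trains on unbalanced counts but lets you swap the learned priors for uniform ones at classification time:

```perl
use strict;
use warnings;
use List::Util qw(sum);

my ( %count, %total, %docs, %vocab );

sub train {
    my ( $cat, @words ) = @_;
    $docs{$cat}++;
    for my $w (@words) {
        $count{$cat}{$w}++;
        $total{$cat}++;
        $vocab{$w} = 1;
    }
}

sub classify {
    my ( $uniform_prior, @words ) = @_;
    my $ndocs    = sum values %docs;
    my $nclasses = scalar keys %docs;
    my $vsize    = scalar keys %vocab;    # vocabulary size, for smoothing
    my %score;
    for my $cat ( keys %docs ) {
        my $prior = $uniform_prior ? 1 / $nclasses : $docs{$cat} / $ndocs;
        $score{$cat} = log($prior);
        for my $w (@words) {
            # Laplace smoothing so unseen words don't zero out a category
            my $p = ( ( $count{$cat}{$w} || 0 ) + 1 )
                  / ( $total{$cat} + $vsize );
            $score{$cat} += log($p);
        }
    }
    my ($best) = sort { $score{$b} <=> $score{$a} } keys %score;
    return $best;
}

# Hypothetical unbalanced corpus: ten B documents, one A document.
train( 'B', qw(meeting agenda report) ) for 1 .. 10;
train( 'A', qw(winner prize claim) );

print classify( 0, 'prize' ), "\n";  # B -- the 10:1 corpus prior wins
print classify( 1, 'prize' ), "\n";  # A -- uniform priors let the word decide
```

Is switching to a uniform prior at classification time, while still learning the word probabilities from the unbalanced corpus, the right way to think about this?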
Can you help?
Re: Understanding Naive Bayesian classifiers
by Anonymous Monk on Aug 03, 2008 at 02:28 UTC
by apl (Monsignor) on Aug 03, 2008 at 14:05 UTC |