
The Naive Bayes Classifier explained

Reading the academic literature, Text Analytics seems difficult. However, applying it in practice has shown us that Text Classification is much easier than it looks; most of the Classifiers consist of only a few lines of code. In this three-part blog series we will examine three well-known Classifiers: Naive Bayes, Maximum Entropy and Support Vector Machines. From the introductory blog we know that the Naive Bayes Classifier is based on the bag-of-words model.

With the bag-of-words model we check which words of the text document appear in a positive-words list or a negative-words list. If a word appears in the positive-words list, the total score of the text is increased by 1; if it appears in the negative-words list, the score is decreased by 1. If the total score is positive at the end, the text is classified as positive, and if it is negative, the text is classified as negative. Simple enough!
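As an illustration, here is a minimal sketch of this scoring scheme in plain Python; the word lists and the example sentence are made up purely for demonstration purposes.

# Minimal bag-of-words sentiment scoring (illustrative word lists)
positive_words = {"good", "great", "excellent", "love"}
negative_words = {"bad", "poor", "terrible", "hate"}

def classify_bag_of_words(tokens):
    # +1 for every positive word, -1 for every negative word
    score = sum((token in positive_words) - (token in negative_words)
                for token in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify_bag_of_words("the food was great and we love this place".split()))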

With the Naive Bayes model, we do not take only a small set of positive and negative words into account, but all words the NB Classifier was trained with, i.e. all words present in the training set. If a word has not appeared in the training set, we have no data available and apply Laplacian smoothing (use 1 instead of the conditional probability of the word). The probability that a document belongs to a class C is given by the class probability P(C) multiplied by the product of the conditional probabilities of each word for that class.

P = P(C) \cdot \prod_{i=1}^{n} P(d_i|C) = P(C) \cdot \prod_{i=1}^{n} \frac{count(d_i, C)}{\sum_j count(d_j, C)}

= P(C) \cdot \prod_{i=1}^{n} \frac{count(d_i, C)}{V_C}

Here count(d_i, C) is the number of occurrences of word d_i in class C, V_C is the total number of words in class C and n is the number of words in the document we are currently classifying.
V_C does not change (unless the training set is expanded), so it can be placed outside of the product:

P = \frac{P(C)}{V_C^n} \cdot \prod_{i=1}^{n} count(d_i, C)
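To make the formula concrete, here is a toy computation of this score for a single hypothetical class; the prior, counts and document below are made-up numbers, not values from the actual training set.

import math

# Hypothetical values for one class C, purely for illustration
prior_C = 0.33                                     # P(C)
V_C = 12                                           # total number of words in class C
word_counts_C = {"good": 4, "movie": 3, "fun": 2}  # count(d_i, C)

document = ["good", "movie", "fun"]
n = len(document)

# P = P(C) / V_C**n * product of count(d_i, C)
score = prior_C / V_C**n * math.prod(word_counts_C[w] for w in document)
print(score)  # 0.33 / 12**3 * (4 * 3 * 2), roughly 0.0046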

Handling large training sets

In theory we want the training set to be as large as possible, since that will increase the accuracy. In practice this results in large values for V_C and n. For example, for our training set with 5000 reviews we got:

V_{neg} = 90,346
V_{neu} = 109,287
V_{pos} = 459,582

Taking the n-th power of such a large number will definitely lead to computational problems, so we should normalize it: we divide V_C by a constant so that it becomes a number close to 1. In our case this constant is 100,000, and the normalization results in:

V_{neg,norm} = 0.90346
V_{neu,norm} = 1.09287
V_{pos,norm} = 4.59582

However, if the number of words n in the document is large, this can still lead to computational problems:

>>> 4.59**500
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OverflowError: (34, 'Result too large')
>>>

In Python there are modules which can handle arbitrarily large numbers (such as decimal), but a better solution is to take the logarithm of the probabilities. This does not affect the outcome of the classification: the logarithm is monotonically increasing, so if a document has the highest probability for a specific class, it will also have the highest log-probability for that class.
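A quick sanity check, using the same value that overflowed above, shows that the log-space computation stays well within the range of an ordinary float:

import math

# log(4.59**500) = 500 * log(4.59) -- an ordinary float, no OverflowError
print(500 * math.log(4.59))  # roughly 762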

This results in:

\log(P) = \log\left( \frac{P(C)}{V_C^n} \cdot \prod_{i=1}^{n} count(d_i, C) \right)
= \log\left( \frac{P(C)}{V_C^n} \right) + \log\left( \prod_{i=1}^{n} count(d_i, C) \right)
= \log(P(C)) - n \log(V_C) + \sum_{i=1}^{n} \log(count(d_i, C))
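Translated into Python, the log-score of a document for a single class could look like the sketch below; class_prior, word_counts and V_C are assumed to come from the training step, and words that were not seen during training are skipped, following the smoothing rule described above (their conditional probability is taken to be 1, and log(1) = 0).

import math

def log_score(tokens, class_prior, word_counts, V_C):
    # log(P(C)) + sum over seen words of log(count(d_i, C) / V_C)
    score = math.log(class_prior)
    for token in tokens:
        count = word_counts.get(token, 0)
        if count == 0:
            # word not in the training set: its conditional probability is taken as 1
            continue
        score += math.log(count) - math.log(V_C)
    return score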

Implementing Naive Bayes

With this information it is easy to implement a Naive Bayes Classifier algorithm.
Our training set takes the form of a list of tuples, where each tuple contains two elements: the tokenized text and the label.
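Under the assumption that the training set looks like [(['some', 'tokenized', 'review'], 'pos'), ...], a minimal training step only has to collect the word counts and totals per class; classification then picks the class with the highest log-score, using the log_score function sketched above.

from collections import Counter, defaultdict

def train(training_set):
    # training_set: list of (tokens, label) tuples
    word_counts = defaultdict(Counter)  # count(d_i, C) per class
    class_docs = Counter()              # number of documents per class
    for tokens, label in training_set:
        word_counts[label].update(tokens)
        class_docs[label] += 1
    priors = {c: class_docs[c] / len(training_set) for c in class_docs}  # P(C)
    totals = {c: sum(word_counts[c].values()) for c in word_counts}      # V_C
    return priors, word_counts, totals

def classify(tokens, priors, word_counts, totals):
    # pick the class with the highest log-score
    return max(priors, key=lambda c: log_score(tokens, priors[c], word_counts[c], totals[c]))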

Check out the rest of the first blog post about the Naive Bayes Classifier.