A simple example of a Multinomial Naive Bayes classifier in Python

Asked: 2013-07-04 10:36:17

Tags: python machine-learning classification nltk bayesian

I'm looking for a simple example of how to run a Multinomial Naive Bayes classifier. I came across this example on StackOverflow:

Implementing Bag-of-Words Naive-Bayes classifier in NLTK

import numpy as np
from nltk.probability import FreqDist
from nltk.classify import SklearnClassifier
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Pipeline: tf-idf weighting -> chi-squared feature selection (top 1000 features) -> Multinomial Naive Bayes
pipeline = Pipeline([('tfidf', TfidfTransformer()),
                     ('chi2', SelectKBest(chi2, k=1000)),
                     ('nb', MultinomialNB())])
classif = SklearnClassifier(pipeline)

from nltk.corpus import movie_reviews
# One FreqDist per review: a word -> count mapping used as the bag-of-words feature dict
pos = [FreqDist(movie_reviews.words(i)) for i in movie_reviews.fileids('pos')]
neg = [FreqDist(movie_reviews.words(i)) for i in movie_reviews.fileids('neg')]
add_label = lambda lst, lab: [(x, lab) for x in lst]
# Original code from the thread:
# classif.train(add_label(pos[:100], 'pos') + add_label(neg[:100], 'neg'))
classif.train(add_label(pos, 'pos') + add_label(neg, 'neg'))  # Made changes here

# Original code from the thread:
# l_pos = np.array(classif.batch_classify(pos[100:]))
# l_neg = np.array(classif.batch_classify(neg[100:]))
l_pos = np.array(classif.batch_classify(pos))  # Made changes here
l_neg = np.array(classif.batch_classify(neg))  # Made changes here
print "Confusion matrix:\n%d\t%d\n%d\t%d" % (
          (l_pos == 'pos').sum(), (l_pos == 'neg').sum(),
          (l_neg == 'pos').sum(), (l_neg == 'neg').sum())

After running this example, I get the following warning:

C:\Python27\lib\site-packages\scikit_learn-0.13.1-py2.7-win32.egg\sklearn\feature_selection\univariate_selection.py:327: 
UserWarning: Duplicate scores. Result may depend on feature ordering.There are probably duplicate features, 
or you used a classification score for a regression task.
warn("Duplicate scores. Result may depend on feature ordering."

Confusion matrix:
876 124
63  937

So, my questions are...

  1. Can anyone tell me what this warning message means?
  2. I made some changes to the original code, but why are the numbers in the confusion matrix so much higher than those in the original thread?
  3. How do I test the accuracy of this classifier?

2 Answers:

Answer 0 (score: 2)

The original code trains on the first 100 positive and the first 100 negative examples and then classifies the remainder. You removed that boundary and used every example in both the training and the classification phase; in other words, you have duplicated features. To fix this, split the dataset into two sets, a training set and a test set.
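
For example, restoring the split from the original thread, reusing the variables from the question (the 100-document cutoff is just an illustration; any held-out split works):

# Train only on the first 100 documents of each class
classif.train(add_label(pos[:100], 'pos') + add_label(neg[:100], 'neg'))

# Classify only the held-out documents the classifier has never seen
l_pos = np.array(classif.batch_classify(pos[100:]))
l_neg = np.array(classif.batch_classify(neg[100:]))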

The confusion matrix numbers are higher (or simply different) because you are training on different data.

The confusion matrix is a measure of accuracy and shows the number of false positives and so on. Read more here: http://en.wikipedia.org/wiki/Confusion_matrix
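
Once the classifier only sees held-out data, overall accuracy can be read straight off those counts. A minimal sketch in the same Python 2 style as the question, reusing l_pos and l_neg from the code above:

# Accuracy = correctly classified documents / all classified documents
true_pos = (l_pos == 'pos').sum()
true_neg = (l_neg == 'neg').sum()
accuracy = float(true_pos + true_neg) / (len(l_pos) + len(l_neg))
print "Accuracy: %.3f" % accuracy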

Answer 1 (score: 1)

I used the original code, with only the first 100 entries included in the training set, but I still get the warning. My output is:

In [6]: %run testclassifier.py
C:\Users\..\AppData\Local\Enthought\Canopy\User\lib\site-packages\sklearn\feature_selection\univariate_selection.py:319: UserWarning: Duplicate scores. Result may depend on feature ordering.There are probably duplicate features, or you used a classification score for a regression task.
  warn("Duplicate scores. Result may depend on feature ordering."
Confusion matrix:
427     473
132     768