How is the "most informative features" percentage calculated in Naive Bayes (NLTK, Python)?

Asked: 2017-10-30 14:06:33

Tags: python nlp nltk logistic-regression

When we run the following command, we typically get results like this:

 classifier.show_most_informative_features(10)

Result:

Most Informative Features
             outstanding = 1                 pos : neg    =     13.9 : 1.0
               insulting = 1                 neg : pos    =     13.7 : 1.0
              vulnerable = 1                 pos : neg    =     13.0 : 1.0
               ludicrous = 1                 neg : pos    =     12.6 : 1.0
             uninvolving = 1                 neg : pos    =     12.3 : 1.0
              astounding = 1                 pos : neg    =     11.7 : 1.0

Does anyone know how the values 13.9, 13.7, etc. are calculated?

Also, we can get the most informative features via classifier.show_most_informative_features(10) with Naive Bayes, but if we want the same kind of result using logistic regression, could someone please suggest how to get it? I saw a post on Stack Overflow, but it requires a feature vector, which I am not using to create my features.

import nltk
from nltk.classify.scikitlearn import SklearnClassifier
from sklearn.linear_model import LogisticRegression

classifier = nltk.NaiveBayesClassifier.train(train_set)
print("Original Naive Bayes accuracy percent: ", nltk.classify.accuracy(classifier, dev_set) * 100)
classifier.show_most_informative_features(10)

LogisticRegression_classifier = SklearnClassifier(LogisticRegression())
LogisticRegression_classifier.train(train_set)
print("LogisticRegression accuracy percent: ", nltk.classify.accuracy(LogisticRegression_classifier, dev_set) * 100)

1 Answer:

Answer 0 (score: 2)

The "most informative" features of the Naive Bayes classifier in NLTK are documented as follows:

def most_informative_features(self, n=100):
    """
    Return a list of the 'most informative' features used by this
    classifier.  For the purpose of this function, the
    informativeness of a feature ``(fname,fval)`` is equal to the
    highest value of P(fname=fval|label), for any label, divided by
    the lowest value of P(fname=fval|label), for any label:
    |  max[ P(fname=fval|label1) / P(fname=fval|label2) ]
    """
    # The set of (fname, fval) pairs used by this classifier.
    features = set()
    # The max & min probability associated w/ each (fname, fval)
    # pair.  Maps (fname,fval) -> float.
    maxprob = defaultdict(lambda: 0.0)
    minprob = defaultdict(lambda: 1.0)

    for (label, fname), probdist in self._feature_probdist.items():
        for fval in probdist.samples():
            feature = (fname, fval)
            features.add(feature)
            p = probdist.prob(fval)
            maxprob[feature] = max(p, maxprob[feature])
            minprob[feature] = min(p, minprob[feature])
            if minprob[feature] == 0:
                features.discard(feature)

    # Convert features to a list, & sort it by how informative
    # features are.
    features = sorted(features,
                      key=lambda feature_:
                      minprob[feature_]/maxprob[feature_])
    return features[:n]

In the case of binary classification ('pos' vs. 'neg'), where your features come from a unigram bag-of-words (BoW) model, the "information value" returned by most_informative_features() for the word outstanding is equal to:

 p('outstanding'|'pos') / p('outstanding'|'neg')

The function iterates through all features (in the case of a unigram BoW model, the features are words) and then takes the top 100 words with the highest "information value".
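To see where a number like 13.9 comes from concretely, you can read the same per-label probability distributions that the method uses internally. A minimal sketch, assuming the trained classifier from the question and a unigram BoW feature set where 'outstanding' takes the value 1 (note that _feature_probdist is a private attribute, the same one the quoted source iterates over):

# Reproduce the "pos : neg = 13.9 : 1.0" ratio by hand.
# _feature_probdist maps (label, fname) pairs to probability
# distributions over feature values, as in the quoted source.
p_pos = classifier._feature_probdist[('pos', 'outstanding')].prob(1)
p_neg = classifier._feature_probdist[('neg', 'outstanding')].prob(1)
print(p_pos / p_neg)  # ~13.9 for the model that produced the output above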

The probability of a word given a label is computed in the train() function using the expected likelihood estimate, via ELEProbDist, which is a LidstoneProbDist object whose gamma parameter is set to 0.5, and which does the following:

class LidstoneProbDist(ProbDistI):
    """
    The Lidstone estimate for the probability distribution of the
    experiment used to generate a frequency distribution.  The
    "Lidstone estimate" is parameterized by a real number *gamma*,
    which typically ranges from 0 to 1.  The Lidstone estimate
    approximates the probability of a sample with count *c* from an
    experiment with *N* outcomes and *B* bins as
    ``(c+gamma)/(N+B*gamma)``.  This is equivalent to adding
    *gamma* to the count for each bin, and taking the maximum
    likelihood estimate of the resulting frequency distribution.
    """