I am doing sentiment analysis with NLTK, training on the built-in movie_reviews corpus, and every time I get neg as the result.
My code:
import nltk
import random
import pickle
from nltk.corpus import movie_reviews
from os.path import exists
from nltk.classify import apply_features
from nltk.tokenize import word_tokenize, sent_tokenize
documents = [(list(movie_reviews.words(fileid)), category)
             for category in movie_reviews.categories()
             for fileid in movie_reviews.fileids(category)]

all_words = []
for w in movie_reviews.words():
    all_words.append(w.lower())

all_words = nltk.FreqDist(all_words)
word_features = list(all_words.keys())
print(word_features)

def find_features(document):
    words = set(document)
    features = {}
    for w in word_features:
        features[w] = (w in words)
    return features
featuresets = [(find_features(rev), category) for (rev, category) in documents]
numtrain = int(len(documents) * 90 / 100)
training_set = apply_features(find_features, documents[:numtrain])
testing_set = apply_features(find_features, documents[numtrain:])
classifier = nltk.NaiveBayesClassifier.train(training_set)
classifier.show_most_informative_features(15)
Example_Text = " avoids annual conveys vocal thematic doubts fascination slip avoids outstanding thematic astounding seamless"
doc = word_tokenize(Example_Text.lower())
featurized_doc = {i:(i in doc) for i in word_features}
tagged_label = classifier.classify(featurized_doc)
print(tagged_label)
I am using a NaiveBayesClassifier here, training it on the movie_reviews corpus and then using the trained classifier to test the sentiment of my Example_Text.
As you can see, my Example_Text contains some random words. When I call classifier.show_most_informative_features(15), it prints the 15 words with the highest pos:neg (or neg:pos) ratio. I picked the positive words from that list:
Most Informative Features
         avoids = True              pos : neg    =     12.1 : 1.0
      insulting = True              neg : pos    =     10.8 : 1.0
      atrocious = True              neg : pos    =     10.6 : 1.0
    outstanding = True              pos : neg    =     10.2 : 1.0
       seamless = True              pos : neg    =     10.1 : 1.0
       thematic = True              pos : neg    =     10.1 : 1.0
     astounding = True              pos : neg    =     10.1 : 1.0
           3000 = True              neg : pos    =      9.9 : 1.0
         hudson = True              neg : pos    =      9.9 : 1.0
      ludicrous = True              neg : pos    =      9.8 : 1.0
          dread = True              pos : neg    =      9.5 : 1.0
          vocal = True              pos : neg    =      9.5 : 1.0
        conveys = True              pos : neg    =      9.5 : 1.0
         annual = True              pos : neg    =      9.5 : 1.0
           slip = True              pos : neg    =      9.5 : 1.0
So why do I not get pos? Why do I always get neg even though the classifier is properly trained?
Answer 0 (score: 2)
The problem is that you include every word as a feature, and the word:False features add a lot of extra noise that drowns out the informative positive features. I looked at the two log probabilities and they are very close: -812 vs -808. In this kind of problem it is common to use only word:True style features, because all the others just add noise.
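The per-label log probabilities can be inspected with prob_classify. A minimal self-contained sketch (using a made-up two-document training set, not the real movie_reviews data):

```python
import nltk

# Toy training set (hypothetical, just to get a trained classifier):
train = [({"outstanding": True}, "pos"),
         ({"insulting": True}, "neg")]
classifier = nltk.NaiveBayesClassifier.train(train)

# prob_classify returns a probability distribution over the labels,
# so the per-label log probabilities can be compared directly.
dist = classifier.prob_classify({"outstanding": True})
for label in sorted(dist.samples()):
    print(label, dist.logprob(label))
```

On the full movie_reviews classifier, the same call shows how narrow the gap between pos and neg really is.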
I copied your code but modified the last three lines as follows:
featurized_doc = {c:True for c in Example_Text.split()}
tagged_label = classifier.classify(featurized_doc)
print(tagged_label)
and got the output 'pos'.
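Why a handful of strong word:True features lose to thousands of word:False features can be seen in a small pure-Python sketch of the naive Bayes log score. The likelihood numbers below are invented for illustration, not taken from movie_reviews:

```python
import math

# Hypothetical likelihoods P(word appears | label) as (pos, neg) pairs:
# two strongly positive words, plus 5000 "filler" vocabulary words whose
# estimated likelihoods differ slightly between the two classes.
likelihoods = {"outstanding": (0.20, 0.02), "seamless": (0.18, 0.02)}
for i in range(5000):
    likelihoods[f"filler{i}"] = (0.100, 0.095)  # tiny, noisy difference

doc = {"outstanding", "seamless"}  # the "review" being classified

def log_score(label, true_only):
    """Naive Bayes log-likelihood of the document under one label."""
    idx = 0 if label == "pos" else 1
    total = 0.0
    for word, probs in likelihoods.items():
        if word in doc:
            total += math.log(probs[idx])      # word:True feature
        elif not true_only:
            total += math.log(1 - probs[idx])  # word:False feature
    return total

# All features: 5000 word:False terms swamp the two informative words.
full = {lbl: log_score(lbl, true_only=False) for lbl in ("pos", "neg")}
# True-only features: the two informative words decide the label.
sparse = {lbl: log_score(lbl, true_only=True) for lbl in ("pos", "neg")}

print("all features ->", max(full, key=full.get))      # neg
print("True-only    ->", max(sparse, key=sparse.get))  # pos
```

Each filler word contributes only log(0.905) - log(0.900) ≈ 0.0055 toward neg, but 5000 of them add up to far more than the two informative words contribute toward pos, which is exactly the near-tie (-812 vs -808) seen in the real classifier.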