I generated my own corpus, so I split my training text file like this:
POS|This film was awesome, highly recommended
NEG|I did not like this film
NEU|I went to the movies
POS|this film is very interesting, i liked a lot
NEG|the film was very boring i did not like it
NEU|the cinema is big
NEU|the cinema was dark
For testing, I also have another, unlabeled text review:
I did not like this film
Then I do the following:
import pandas as pd
from sklearn.feature_extraction.text import HashingVectorizer

trainingdata = pd.read_csv('/Users/user/Desktop/training.txt',
                           header=None, sep='|', names=['labels', 'movies_reviews'])

vect = HashingVectorizer(analyzer='word', ngram_range=(2, 2), lowercase=True, n_features=7)
X = vect.fit_transform(trainingdata['movies_reviews'])
y = trainingdata['labels']

TestText = pd.read_csv('/Users/user/Desktop/testing.txt',
                       header=None, names=['test_opinions'])
test = vect.transform(TestText['test_opinions'])

from sklearn.svm import SVC
svm = SVC()
svm.fit(X, y)
prediction = svm.predict(test)
print(prediction)
The prediction is:
['NEU']
So what I'm wondering is: why is this prediction wrong? Is the problem in the code, the features, or the classification algorithm? While experimenting, I noticed that the classifier always predicts the label of the training file's last element: when I remove the last review from the training text file, the prediction changes to match the new last element. Any idea how to fix this?
Answer 0 (score: 1)
SVMs are very sensitive to parameter settings; you need to run a grid search to find the right values. I tried training two kinds of Naive Bayes on your dataset and got perfect accuracy on the training set:
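The grid search mentioned above could be sketched as follows. This is a minimal illustration, not part of the original answer: the inline DataFrame stands in for the original training.txt, and the parameter grid values are assumptions chosen for demonstration.

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Inline stand-in for training.txt, same POS|NEG|NEU format
trainingdata = pd.DataFrame({
    'labels': ['POS', 'NEG', 'NEU', 'POS', 'NEG', 'NEU', 'NEU'],
    'movies_reviews': [
        'This film was awesome, highly recommended',
        'I did not like this film',
        'I went to the movies',
        'this film is very interesting, i liked a lot',
        'the film was very boring i did not like it',
        'the cinema is big',
        'the cinema was dark',
    ],
})

vect = CountVectorizer(analyzer='word', ngram_range=(1, 2), lowercase=True)
X = vect.fit_transform(trainingdata['movies_reviews'])
y = trainingdata['labels']

# Illustrative grid; cv=2 because the smallest class has only two samples
param_grid = {'C': [0.1, 1, 10, 100], 'gamma': ['scale', 0.01, 0.1, 1]}
search = GridSearchCV(SVC(), param_grid, cv=2)
search.fit(X, y)
print(search.best_params_)
```

With a corpus this small the cross-validation scores are not very meaningful, but the same pattern scales to a real dataset.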
from sklearn.naive_bayes import GaussianNB, MultinomialNB
from sklearn.feature_extraction.text import HashingVectorizer, CountVectorizer

# first option - GaussianNB
vect = HashingVectorizer(analyzer='word', ngram_range=(2, 2), lowercase=True)
X = vect.fit_transform(trainingdata['movies_reviews'])
y = trainingdata['labels']
nb = GaussianNB().fit(X.A, y)  # GaussianNB needs a dense input
nb.predict(X.A) == y

# second option - MultinomialNB (input must be non-negative, so use CountVectorizer instead)
vect = CountVectorizer(analyzer='word', ngram_range=(2, 2), lowercase=True)
X = vect.fit_transform(trainingdata['movies_reviews'])
y = trainingdata['labels']
nb = MultinomialNB().fit(X, y)
nb.predict(X) == y
In both cases the output is
Out[33]:
0 True
1 True
2 True
3 True
4 True
5 True
6 True
Name: labels, dtype: bool
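To close the loop on the original question, the fitted MultinomialNB can then be applied to the unlabeled test review. A minimal self-contained sketch, where the inline data stands in for the original training.txt and testing.txt files:

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

trainingdata = pd.DataFrame({
    'labels': ['POS', 'NEG', 'NEU', 'POS', 'NEG', 'NEU', 'NEU'],
    'movies_reviews': [
        'This film was awesome, highly recommended',
        'I did not like this film',
        'I went to the movies',
        'this film is very interesting, i liked a lot',
        'the film was very boring i did not like it',
        'the cinema is big',
        'the cinema was dark',
    ],
})

vect = CountVectorizer(analyzer='word', ngram_range=(2, 2), lowercase=True)
X = vect.fit_transform(trainingdata['movies_reviews'])
nb = MultinomialNB().fit(X, trainingdata['labels'])

# transform (not fit_transform) the test review with the same vectorizer,
# so it is mapped into the vocabulary learned from the training set
test = vect.transform(['I did not like this film'])
print(nb.predict(test))
```

Since the test sentence's bigrams exactly match one of the NEG training reviews, this setup recovers the expected NEG label.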