Tagged text classification problem, wrong prediction?

Posted: 2014-12-30 23:13:59

Tags: python machine-learning nlp scikit-learn nltk

I'm experimenting with the different classifiers and vectorizers that scikit-learn provides, so let's say I have the following:

training = [["this was a good movie, 'POS'"],
            ["this was a bad movie, 'NEG'"],
            ["i went to the movies, 'NEU'"],
            ["this movie was very exiting it was great, 'POS'"],
            ["this is a boring film, 'NEG'"],
            ........................,
            [" N-sentence, 'LABEL'"]]

# Where each element of the list is another list that holds one document, then:

splitted = [...]  # remove the tags from training

from sklearn.feature_extraction.text import HashingVectorizer
X = HashingVectorizer(
    tokenizer=lambda doc: doc, lowercase=False).fit_transform(splitted)

print X.toarray()

Then I get this vector representation:

[[ 0.  0.  0. ...,  0.  0.  0.]
 [ 0.  0.  0. ...,  0.  0.  0.]
 [ 0.  0.  0. ...,  0.  0.  0.]
 [ 0.  0.  0. ...,  0.  0.  0.]
 [ 0.  0.  0. ...,  0.  0.  0.]]

The thing is, I don't know whether I'm vectorizing the corpus correctly. Then:

#This is the test corpus:
test = ["I don't like this movie it sucks it doesn't liked me"]

#I vectorize the corpus with hashing vectorizer
Y = HashingVectorizer(
    tokenizer=lambda doc: doc, lowercase=False).fit_transform(test)

Then I print Y:

[[ 0.  0.  0. ...,  0.  0.  0.]]

Then:

y = [x[-1] for x in training]

# import SVM and classify
from sklearn.svm import SVC
svm = SVC()
svm.fit(X, y)
result = svm.predict(X)
print "\nThe opinion is:\n", result

The problem is right here: I get [NEG] for the following document, which is not the right prediction:

["this was a good movie, 'POS'"]

I think I'm not vectorizing training correctly, or that the y target is wrong. Can anyone help me understand what is going on and how to vectorize training so that the predictions come out right?

1 Answer:

Answer 0 (score: 2):

I'll leave it up to you to transform your training data into the expected format:

training = ["this was a good movie",
            "this was a bad movie",
            "i went to the movies",
            "this movie was very exiting it was great", 
            "this is a boring film"]

labels = ['POS', 'NEG', 'NEU', 'POS', 'NEG']
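
For reference, a minimal sketch of how the original entries could be converted into these two lists (this assumes every entry is a one-element list whose string ends with ", 'LABEL'", as in the question):

raw = [["this was a good movie, 'POS'"],
       ["this was a bad movie, 'NEG'"]]

training, labels = [], []
for item in raw:
    # split off the trailing ", 'POS'" part and drop the quotes around the label
    text, label = item[0].rsplit(", ", 1)
    training.append(text)
    labels.append(label.strip("'"))

print(training)   # ['this was a good movie', 'this was a bad movie']
print(labels)     # ['POS', 'NEG']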

Feature extraction

>>> from sklearn.feature_extraction.text import HashingVectorizer
>>> vect = HashingVectorizer(n_features=5, stop_words='english', non_negative=True)
>>> X_train = vect.fit_transform(training)
>>> X_train.toarray()
[[ 0.          0.70710678  0.          0.          0.70710678]
 [ 0.70710678  0.70710678  0.          0.          0.        ]
 [ 0.          0.          0.          0.          0.        ]
 [ 0.          0.89442719  0.          0.4472136   0.        ]
 [ 1.          0.          0.          0.          0.        ]]

With a bigger corpus you should increase n_features to avoid collisions; I used 5 here only so that the resulting matrix can be inspected. Also note that I used stop_words='english': with so few examples it is important to get rid of the stop words, otherwise you might confuse the classifier.
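
As a rough, purely illustrative check (not part of the original answer) of how much n_features matters here, one can compare the number of distinct non-stop words with the number of hash columns they actually occupy:

from sklearn.feature_extraction.text import CountVectorizer, HashingVectorizer

docs = ["this was a good movie",
        "this was a bad movie",
        "i went to the movies",
        "this movie was very exiting it was great",
        "this is a boring film"]

# vocabulary size after stop-word removal
n_words = len(CountVectorizer(stop_words='english').fit(docs).vocabulary_)

for n_features in (5, 2 ** 10, 2 ** 20):
    cols = HashingVectorizer(n_features=n_features,
                             stop_words='english').transform(docs).nonzero()[1]
    # fewer distinct columns than words means some words collided
    print("n_features=%d  words=%d  columns used=%d"
          % (n_features, n_words, len(set(cols))))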

Model training

from sklearn.svm import SVC

model = SVC()
model.fit(X_train, labels)
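
As a side note (not in the original answer), the vectorizer and the classifier can also be bundled into a scikit-learn Pipeline, which guarantees that exactly the same transformation is applied at training and at prediction time; a minimal sketch using the training and labels lists from above:

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.svm import SVC

# the HashingVectorizer is stateless, so the pipeline can take raw strings
pipeline = Pipeline([
    ('vect', HashingVectorizer(n_features=2 ** 10, stop_words='english')),
    ('clf', SVC()),
])
pipeline.fit(training, labels)
print(pipeline.predict(["I think it was a good movie"]))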

Prediction

>>> test = ["I don't like this movie it sucks it doesn't liked me"]
>>> X_pred = vect.transform(test)
>>> model.predict(X_pred)
['NEG']

>>> test = ["I think it was a good movie"]
>>> X_pred = vect.transform(test)
>>> model.predict(X_pred)
['POS']

Edit: note that the correct classification of the first test example is just a lucky coincidence, since I don't see any word in it that the model could have learned as negative from the training set. In the second example, the word good probably triggered the positive classification.
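
To see how little usable signal that first test sentence actually carried, one rough check (my own illustration, reusing the vect and X_train objects from above) is to look at which hashed columns it shares with the training matrix:

# columns that are non-zero both in the test document and in the training
# matrix are the only features the SVC can base its decision on; with
# n_features=5 most of that overlap is due to hash collisions anyway
test_cols = set(vect.transform(
    ["I don't like this movie it sucks it doesn't liked me"]).nonzero()[1])
train_cols = set(X_train.nonzero()[1])
print(test_cols & train_cols)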