Cross-validation and text classification on imbalanced data

Date: 2018-06-06 05:06:25

Tags: python-3.x machine-learning svm pipeline cross-validation

I am new to NLP and am trying to build a text classifier, but my data is currently imbalanced: the largest class has up to 280 entries while the smallest has only about 30. I am trying to apply cross-validation to the current data, but after searching for several days I still cannot implement it. It looks very simple, yet I cannot get it working. Here is my code:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import SGDClassifier

y = resample.Subsystem
X = resample['new description']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)

count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(X_train)
X_train_counts.shape

tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
X_train_tfidf.shape

# SVM
text_clf_svm = Pipeline([
    ('vect', CountVectorizer(stop_words='english')),
    ('tfidf', TfidfTransformer()),
    # note: SGDClassifier's n_iter parameter was renamed to max_iter
    ('clf-svm', SGDClassifier(loss='hinge', penalty='l2', alpha=1e-3,
                              max_iter=5, tol=None, random_state=42)),
])
text_clf_svm.fit(X_train, y_train)
predicted_svm = text_clf_svm.predict(X_test)
print('The best accuracy is : ', np.mean(predicted_svm == y_test))

I have already done some grid search and stemming, but now I want to add cross-validation to this code. I have cleaned the data fairly well, yet I am still only getting about 60% accuracy. Any help would be appreciated.

2 answers:

Answer 0: (score: 0)

Try oversampling or undersampling. Since the data is highly imbalanced, the model is biased toward the classes with more data points. After oversampling/undersampling, that bias becomes much smaller and accuracy should improve.
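A minimal sketch of the oversampling idea, using `sklearn.utils.resample` (the class names and counts here are invented to mirror the question's 280-vs-30 imbalance):

```python
import pandas as pd
from sklearn.utils import resample

# Toy frame mimicking the question's imbalance (280 vs 30 rows)
df = pd.DataFrame({
    'text': ['a'] * 280 + ['b'] * 30,
    'label': ['major'] * 280 + ['minor'] * 30,
})

majority = df[df.label == 'major']
minority = df[df.label == 'minor']

# Upsample the minority class (sampling with replacement)
# until it matches the majority class count
minority_up = resample(minority, replace=True,
                       n_samples=len(majority), random_state=42)
balanced = pd.concat([majority, minority_up])
print(balanced.label.value_counts())
```

Undersampling is the mirror image: `resample(majority, replace=False, n_samples=len(minority))` throws away majority rows instead of duplicating minority ones.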

Instead of an SVM, you could use an MLP. It can give good results even when the data is imbalanced.
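A sketch of that swap, dropping `MLPClassifier` into the same pipeline shape as the question's code (the toy texts and labels are invented for illustration):

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.neural_network import MLPClassifier

# Invented toy corpus, just to make the pipeline runnable
texts = ['good service', 'bad service', 'great product', 'awful product'] * 5
labels = ['pos', 'neg', 'pos', 'neg'] * 5

text_clf_mlp = Pipeline([
    ('vect', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
    # Small hidden layer; tune hidden_layer_sizes for real data
    ('clf-mlp', MLPClassifier(hidden_layer_sizes=(50,), max_iter=500,
                              random_state=42)),
])
text_clf_mlp.fit(texts, labels)
print(text_clf_mlp.predict(['good product']))
```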

Answer 1: (score: 0)

from sklearn.model_selection import StratifiedKFold, RepeatedKFold

# X is the feature set and y is the target
skf = StratifiedKFold(n_splits=5, random_state=None)
# alternative: repeated (non-stratified) K-fold
# kf = RepeatedKFold(n_splits=20, n_repeats=10, random_state=None)

# StratifiedKFold needs y as well, so each fold keeps the class proportions
for train_index, test_index in skf.split(X, y):
    # print("Train:", train_index, "Validation:", test_index)
    # use .iloc for positional indexing when X and y are pandas objects
    X_train, X_test = X.iloc[train_index], X.iloc[test_index]
    y_train, y_test = y.iloc[train_index], y.iloc[test_index]
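To go from the split indices to an actual cross-validated score, the folds can be passed straight to `cross_val_score` together with the question's pipeline; a runnable sketch on an invented toy corpus:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.linear_model import SGDClassifier

# Invented toy data standing in for the question's X and y
X = np.array(['spam offer now', 'hello friend',
              'cheap pills', 'meeting at noon'] * 10)
y = np.array(['spam', 'ham', 'spam', 'ham'] * 10)

pipe = Pipeline([
    ('vect', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
    ('clf-svm', SGDClassifier(loss='hinge', penalty='l2', alpha=1e-3,
                              max_iter=5, tol=None, random_state=42)),
])

# One accuracy score per stratified fold
skf = StratifiedKFold(n_splits=5)
scores = cross_val_score(pipe, X, y, cv=skf)
print(scores.mean())
```

The pipeline is re-fit from scratch on each fold, so the vectorizer never sees the fold's validation texts during fitting, which avoids leakage.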