How do I test unseen test data with cross-validation and predict its labels?

Time: 2020-03-09 14:14:46

Tags: python-3.x pandas scikit-learn sklearn-pandas

1. A CSV containing the data (i.e., text descriptions) together with their category labels

import pandas as pd

df = pd.read_csv('./output/csv_sanitized_16_.csv', dtype=str)
X = df['description_plus']   # text descriptions
y = df['category_id']        # category labels

2. This CSV contains unseen data (i.e., text descriptions) for which labels need to be predicted

df_2 = pd.read_csv('./output/csv_sanitized_2.csv', dtype=str)
X2 = df_2['description_plus']   # unseen descriptions, no labels

A cross-validation function that operates on the training data above (item 1):

from sklearn import preprocessing, svm
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline

def cross_val():
    cv = KFold(n_splits=20)
    # TF-IDF features from the labelled text descriptions
    vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5,
                                 stop_words='english')
    X_train = vectorizer.fit_transform(X)
    # scale without centering (keeps the matrix sparse), then classify with an SVC
    clf = make_pipeline(preprocessing.StandardScaler(with_mean=False), svm.SVC(C=1))
    scores = cross_val_score(clf, X_train, y, cv=cv)
    print(scores)
    print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))

cross_val()

How do I pass the unseen data (item 2) to the cross-validation function, and how do I predict its labels?

1 Answer:

Answer 0 (score: 0)

With scores = cross_val_score(clf, X_train, y, cv=cv) you only obtain the model's cross-validation scores. cross_val_score internally splits the data into training and test folds according to the cv argument.

So the values you get are the cross-validated accuracies of the SVC.
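For intuition, that internal splitting amounts to roughly the following manual loop (a sketch, not what cross_val_score literally executes, and it assumes X_train, y, and clf as defined in the cross_val function above):

from sklearn.base import clone
from sklearn.model_selection import KFold

cv = KFold(n_splits=20)
fold_scores = []
for train_idx, test_idx in cv.split(X_train):
    fold_clf = clone(clf)                                  # fresh, unfitted copy per fold
    fold_clf.fit(X_train[train_idx], y.iloc[train_idx])    # train on 19/20 of the data
    fold_scores.append(fold_clf.score(X_train[test_idx], y.iloc[test_idx]))  # score on the held-out fold
print(sum(fold_scores) / len(fold_scores))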

To get a score on unseen data, you first fit the model, e.g.

clf = make_pipeline(preprocessing.StandardScaler(with_mean=False), svm.SVC(C=1))
clf.fit(X_train, y) # the model is trained now

and then run clf.score(X_unseen, y).

This last call returns the model's accuracy on the unseen data.
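Applied to the question's data, a minimal sketch might look like the one below. It assumes the TfidfVectorizer is fitted once on the labelled descriptions X (outside the cross_val function) so the same vocabulary can be reused, and it uses predict rather than score because the unseen CSV (X2) carries no ground-truth labels:

from sklearn import preprocessing, svm
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# fit the vectorizer on the labelled descriptions only
vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5, stop_words='english')
X_train = vectorizer.fit_transform(X)

clf = make_pipeline(preprocessing.StandardScaler(with_mean=False), svm.SVC(C=1))
clf.fit(X_train, y)  # train on all labelled data

# transform the unseen descriptions with the SAME fitted vectorizer, then predict
X_unseen = vectorizer.transform(X2)
predicted_labels = clf.predict(X_unseen)
print(predicted_labels[:10])

If you did have labels for the unseen set, clf.score(X_unseen, y_unseen) would return its accuracy, as described above.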


Edit: the best way to do what you want is to use a GridSearch: first find the best model using the training data, then evaluate that best model on the unseen (test) data:

from sklearn import svm, datasets
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score

# load some data
iris = datasets.load_iris()
X, y = iris.data, iris.target

# split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# hyperparameter tuning of the SVC model
parameters = {'kernel':('linear', 'rbf'), 'C':[1, 10]}
svc = svm.SVC()

# fit the GridSearch using the TRAINING data
grid_searcher = GridSearchCV(svc, parameters)
grid_searcher.fit(X_train, y_train)

# recover the best estimator (best parameters for the SVC, based on the GridSearch)
best_SVC_model = grid_searcher.best_estimator_

# Now, check how this best model behaves on the test set
cv_scores_on_unseen = cross_val_score(best_SVC_model, X_test, y_test, cv=5)
print(cv_scores_on_unseen.mean())
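To carry this over to the text data from the question, one common pattern (a sketch, assuming the X, y, and X2 variables defined above) is to put the TfidfVectorizer inside the pipeline so GridSearchCV refits it on each training fold, and then predict the unseen descriptions with the best estimator:

from sklearn import preprocessing, svm
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# vectorizer + scaler + SVC in one pipeline, so every CV fold refits the TF-IDF step
text_clf = Pipeline([
    ('tfidf', TfidfVectorizer(sublinear_tf=True, max_df=0.5, stop_words='english')),
    ('scale', preprocessing.StandardScaler(with_mean=False)),
    ('svc', svm.SVC()),
])

param_grid = {'svc__kernel': ('linear', 'rbf'), 'svc__C': [1, 10]}
search = GridSearchCV(text_clf, param_grid, cv=5)
search.fit(X, y)                                  # labelled descriptions from item 1

predicted = search.best_estimator_.predict(X2)    # unseen descriptions from item 2

Keeping the vectorizer inside the pipeline avoids fitting it on data outside the current training fold and lets best_estimator_ handle the raw text of X2 directly.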