How can I compute accuracy and a confusion matrix using K-fold cross-validation?

Date: 2018-06-27 20:11:10

Tags: scikit-learn cross-validation

I am trying to run K-fold cross-validation with K = 30 folds, producing one confusion matrix per fold. How can I compute the model's accuracy with a confidence interval, together with the confusion matrix? Can anyone help me?

My code is:

import numpy as np
from sklearn import model_selection
from sklearn import datasets
from sklearn import svm
import pandas as pd
from sklearn.linear_model import LogisticRegression

UNSW = pd.read_csv('/home/sec/Desktop/CEFET/tudao.csv')

previsores = UNSW.iloc[:, UNSW.columns.isin(('sload', 'dload', 'spkts', 'dpkts', 'swin', 'dwin',
                                              'smean', 'dmean', 'sjit', 'djit', 'sinpkt', 'dinpkt',
                                              'tcprtt', 'synack', 'ackdat', 'ct_srv_src', 'ct_srv_dst',
                                              'ct_dst_ltm', 'ct_src_ltm', 'ct_src_dport_ltm',
                                              'ct_dst_sport_ltm', 'ct_dst_src_ltm'))].values


classe= UNSW.iloc[:, -1].values


X_train, X_test, y_train, y_test = model_selection.train_test_split(
    previsores, classe, test_size=0.4, random_state=0)

print(X_train.shape, y_train.shape)
#((90, 4), (90,))
print(X_test.shape, y_test.shape)
#((60, 4), (60,))

logmodel = LogisticRegression()
logmodel.fit(X_train,y_train)
print(previsores.shape)


########K FOLD
print('########K FOLD########K FOLD########K FOLD########K FOLD')
from sklearn.model_selection import KFold
from sklearn.metrics import confusion_matrix

kf = KFold(n_splits=30, random_state=None, shuffle=False)
kf.get_n_splits(previsores)
for train_index, test_index in kf.split(previsores):

    X_train, X_test = previsores[train_index], previsores[test_index]
    y_train, y_test = classe[train_index], classe[test_index]

    logmodel.fit(X_train, y_train)
    print (confusion_matrix(y_test, logmodel.predict(X_test)))
print(10* '#')

1 Answer:

Answer 0 (score: 2):

For the accuracy, I would use the function cross_val_score, which does what you are looking for. It returns a list of 30 validation accuracies; you can then compute their mean, standard deviation, etc., and build some kind of confidence interval (mean ± 2 * std).
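A minimal sketch of that idea (reusing logmodel, previsores and classe from the question; cv=30 matches the 30 folds, and the ± 2 * std interval is the rough one suggested above):

from sklearn.model_selection import cross_val_score

# One validation accuracy per fold (30 values with cv=30)
scores = cross_val_score(logmodel, previsores, classe, cv=30, scoring='accuracy')

print('mean accuracy:', scores.mean())
print('std deviation:', scores.std())
# Rough confidence interval: mean +/- 2 * std
print('interval:', (scores.mean() - 2 * scores.std(),
                    scores.mean() + 2 * scores.std()))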

Since the confusion matrix cannot be treated as a single performance metric (it is a matrix, not one number), I suggest creating a list and appending each fold's validation confusion matrix to it (at the moment you only print it). At the end you can extract a lot of interesting information from that list.

Update:

...
...
cm_holder = []
for train_index, test_index in kf.split(previsores):
    X_train, X_test = previsores[train_index], previsores[test_index]
    y_train, y_test = classe[train_index], classe[test_index]

    logmodel.fit(X_train, y_train)
    cm_holder.append(confusion_matrix(y_test, logmodel.predict(X_test)))

Note that len(cm_holder) = 30, and each element is an array of shape=(n_classes, n_classes).
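As an example of what you can extract from that list (a sketch, assuming every fold contains all classes so the matrices share the same shape), you can sum the per-fold matrices into one overall confusion matrix, or derive a per-fold accuracy from each matrix:

import numpy as np

# Element-wise sum over the 30 fold matrices -> overall confusion matrix
cm_total = np.sum(cm_holder, axis=0)
print(cm_total)

# Per-fold accuracy: correct predictions (diagonal) over all samples in the fold
fold_acc = [np.trace(cm) / cm.sum() for cm in cm_holder]
print(np.mean(fold_acc), np.std(fold_acc))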
