I want to know why the confusion_matrix changes when I execute the code a second time, and whether this can be avoided. More precisely, the first time I get [[53445 597] [958 5000]], but when I run it again I get [[52556 1486] [805 5153]].
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

# get the data from the dataset and split into training set and test set
# (as_frame=False returns NumPy arrays, so the integer indexing below works)
mnist = fetch_openml('mnist_784', as_frame=False)
X, y = mnist['data'], mnist['target']
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
# make the data random
shuffle_index = np.random.permutation(60000)
X_train, y_train = X_train[shuffle_index], y_train[shuffle_index]
# true for all y_train='2', false for all others
y_train_2 = (y_train == '2')
y_test_2 = (y_test == '2')
# train a binary classifier: label is True if the digit is 2, False otherwise
# I set random_state=0, so the model should not change -- am I right?
sgd_clf = SGDClassifier(random_state=0)
sgd_clf.fit(X_train, y_train_2)
# get the confusion_matrix
y_train_pred = cross_val_predict(sgd_clf, X_train, y_train_2, cv=3)
print('confusion_matrix is', confusion_matrix(y_train_2, y_train_pred))
Answer (score: 1)
You feed the model differently ordered data on each run (via shuffle_index), so there is no reason for the training run, and hence the resulting confusion matrix, to be exactly the same. The shuffle does not change which samples are in the training set, but SGDClassifier updates its weights incrementally and cross_val_predict assigns its cv=3 folds by row position, so both depend on the row order. If the algorithm is working well, though, the results should be close.
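A quick sketch of why the runs differ: an unseeded call to np.random.permutation produces a different ordering every time, so the same 60000 samples reach SGD in a different order on each run.

```python
import numpy as np

# Two independent, unseeded permutations of 60000 indices will
# (with overwhelming probability) disagree element-wise, even though
# both contain exactly the same indices.
a = np.random.permutation(60000)
b = np.random.permutation(60000)

print((a == b).all())                              # different orderings
print(np.array_equal(np.sort(a), np.sort(b)))      # but the same indices
```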
To get rid of the randomness, use a fixed index:
shuffle_index = np.arange(60000) #Rather "not_shuffled_index"
Or use the same seed every time:
np.random.seed(1) #Or any number
shuffle_index = np.random.permutation(60000) #Will be the same for a given seed
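A minimal check that seeding makes the shuffle, and therefore the whole pipeline, reproducible:

```python
import numpy as np

np.random.seed(1)
first = np.random.permutation(60000)

np.random.seed(1)  # resetting the seed reproduces the exact same shuffle
second = np.random.permutation(60000)

print(np.array_equal(first, second))  # -> True
```

Newer NumPy code typically uses the Generator API instead of the global seed, e.g. rng = np.random.default_rng(1) followed by rng.permutation(60000), which keeps the randomness local to one generator object.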