I have been using the "mean decrease accuracy" metric shown on this website.
In the example, the author uses a RandomForestRegressor, but I am using a RandomForestClassifier. So my question is: should I still use r2_score to measure accuracy, or should I switch to classic accuracy (accuracy_score) or the Matthews correlation coefficient (matthews_corrcoef)?
If I should switch, can anyone explain why?
Thanks for your help!
Here is the code from the website, in case you don't feel like clicking through :)
import numpy as np
from collections import defaultdict
from sklearn.datasets import load_boston
from sklearn.ensemble import RandomForestRegressor
# sklearn.cross_validation was renamed sklearn.model_selection
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import r2_score

boston = load_boston()
X = boston["data"]
Y = boston["target"]
names = boston["feature_names"]
rf = RandomForestRegressor()
scores = defaultdict(list)
# cross-validate the scores on a number of different random splits of the data
for train_idx, test_idx in ShuffleSplit(n_splits=100, test_size=0.3).split(X):
    X_train, X_test = X[train_idx], X[test_idx]
    Y_train, Y_test = Y[train_idx], Y[test_idx]
    rf.fit(X_train, Y_train)
    acc = r2_score(Y_test, rf.predict(X_test))
    for i in range(X.shape[1]):
        X_t = X_test.copy()
        np.random.shuffle(X_t[:, i])  # destroy feature i's information
        shuff_acc = r2_score(Y_test, rf.predict(X_t))
        scores[names[i]].append((acc - shuff_acc) / acc)
print("Features sorted by their score:")
print(sorted([(round(np.mean(score), 4), feat)
              for feat, score in scores.items()], reverse=True))
Answer 0 (score: 2)
r2_score is for regression (a continuous response variable), whereas classic classification metrics (for a discrete categorical variable) such as accuracy_score, f1_score, and roc_auc (the last two being the most appropriate if you have imbalanced y labels) are the right choice for your task.
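To see why plain accuracy can mislead when the labels are imbalanced, here is a minimal sketch with made-up predictions (the 90/10 split below is my own illustration, not from the post):

```python
from sklearn.metrics import accuracy_score, matthews_corrcoef

# 90% of the labels are class 0; a classifier that always predicts 0
# looks good on accuracy but is useless by any correlation measure.
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 100

print(accuracy_score(y_true, y_pred))     # 0.9, despite learning nothing
print(matthews_corrcoef(y_true, y_pred))  # 0.0, i.e. no better than chance
```

The Matthews correlation coefficient accounts for all four cells of the confusion matrix, so a constant predictor scores 0 regardless of the class balance.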
Randomly shuffling each feature of the input data matrix and measuring the drop in one of these classification metrics sounds like a valid way to rank feature importance.
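Adapting the loop from the question to a classifier only changes the model and the metric. A minimal sketch, using synthetic data from make_classification instead of the Boston dataset (the dataset parameters and a single train/test split are my own simplifications, not from the post):

```python
import numpy as np
from collections import defaultdict
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# synthetic data in which only some features carry signal
X, y = make_classification(n_samples=500, n_features=6,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
base = accuracy_score(y_test, rf.predict(X_test))

rng = np.random.RandomState(0)
drops = defaultdict(list)
for i in range(X.shape[1]):
    X_t = X_test.copy()
    rng.shuffle(X_t[:, i])  # destroy feature i's information
    shuff = accuracy_score(y_test, rf.predict(X_t))
    drops[i].append((base - shuff) / base)

print("Features sorted by mean decrease in accuracy:")
print(sorted(((round(np.mean(d), 4), f) for f, d in drops.items()),
             reverse=True))
```

Swapping accuracy_score for matthews_corrcoef or roc_auc_score in the same loop works just as well; with imbalanced labels the latter two will give a more honest ranking.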