For my logistic regression model, I want to use cross-validation (e.g. 5-fold) instead of a single train/test split to evaluate the best L1 regularization strength, as in the code below:
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn import metrics

train_x, test_x, train_y, test_y = train_test_split(X_scaled, y, stratify=y,
                                                    test_size=0.3, random_state=2)

# Evaluate L1 regularization strengths for reducing features in final model
C = [10, 1, .1, 0.05, .01, .001]  # As C decreases, more coefficients go to zero
for c in C:
    clf = LogisticRegression(penalty='l1', C=c, solver='liblinear',
                             class_weight='balanced')
    clf.fit(train_x, train_y)
    pred_y = clf.predict(test_x)
    print("Model performance with Inverse Regularization Parameter, C = 1/λ VALUE: ", c)
    cr = metrics.classification_report(test_y, pred_y)
    print(cr)
    print('')
Can someone show me how to do this over 5 different train/test splits with cross-validation (i.e. without copying the code above 5 times with distinct random states)?
Answer 0 (score: 1)
Actually, classification_report is not available as a scoring metric inside sklearn.model_selection.cross_val_score. I will therefore use f1_micro in the following code:
from sklearn.model_selection import cross_val_score

# Evaluate L1 regularization strengths for reducing features in final model
C = [10, 1, .1, 0.05, .01, .001]  # As C decreases, more coefficients go to zero
for c in C:
    clf = LogisticRegression(penalty='l1', C=c, solver='liblinear',
                             class_weight='balanced')
    # using data before splitting (X_scaled) and (y)
    scores = cross_val_score(clf, X_scaled, y, cv=5, scoring="f1_micro")  # <-- add this
    print(scores)  # <-- add this
The variable scores is now an array of five values, representing the f1_micro score of your classifier on five different splits of your original data.
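To actually pick the best C from those fold scores, a common pattern is to average them per candidate and keep the maximum. A minimal sketch, using make_classification as a synthetic stand-in for your own X_scaled and y (which are not shown in the question):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for X_scaled / y, for illustration only
X, y = make_classification(n_samples=300, n_features=10, random_state=2)
X_scaled = StandardScaler().fit_transform(X)

C = [10, 1, .1, 0.05, .01, .001]
mean_scores = {}
for c in C:
    clf = LogisticRegression(penalty='l1', C=c, solver='liblinear',
                             class_weight='balanced')
    # Average the five fold scores into one number per candidate C
    scores = cross_val_score(clf, X_scaled, y, cv=5, scoring='f1_micro')
    mean_scores[c] = scores.mean()

# Candidate with the highest mean cross-validated f1_micro
best_c = max(mean_scores, key=mean_scores.get)
print(best_c, mean_scores[best_c])
```

After this, you would refit a final model on all the training data with best_c.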
If you want to use another scoring metric with sklearn.model_selection.cross_val_score, you can list all available scoring metrics with:

print(metrics.SCORERS.keys())

(In newer scikit-learn versions, SCORERS has been removed; use metrics.get_scorer_names() instead.)
Additionally, you can evaluate several scoring metrics at once; the following uses both f1_micro and f1_macro:

from sklearn.model_selection import cross_validate
cross_validate(clf, X_scaled, y, cv=5, scoring=["f1_micro", "f1_macro"])
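Unlike cross_val_score, cross_validate returns a dict with one entry per metric, keyed as "test_&lt;metric&gt;", plus timing information. A runnable sketch of reading those results, again with make_classification standing in for your own data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for X_scaled / y, for illustration only
X, y = make_classification(n_samples=300, n_features=10, random_state=2)
X_scaled = StandardScaler().fit_transform(X)

clf = LogisticRegression(penalty='l1', C=0.1, solver='liblinear',
                         class_weight='balanced')
results = cross_validate(clf, X_scaled, y, cv=5,
                         scoring=["f1_micro", "f1_macro"])

# One array of five fold scores per metric, keyed "test_<metric>",
# alongside "fit_time" and "score_time"
print(sorted(results.keys()))
print(results["test_f1_micro"].mean(), results["test_f1_macro"].mean())
```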