Using cross-validation in scikit-learn to find the best Lasso / L1 regularization strength for logistic regression

Asked: 2020-06-02 18:32:14

Tags: python scikit-learn cross-validation lasso-regression

For my logistic regression model, I would like to evaluate the best L1 regularization strength using cross-validation (e.g., 5-fold) instead of a single train/test split, as in the code below:

from sklearn import metrics
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

train_x, test_x, train_y, test_y = train_test_split(X_scaled, y, stratify=y, test_size=0.3,
    random_state=2)

# Evaluate L1 regularization strengths for reducing features in final model
C = [10, 1, .1, 0.05, .01, .001]  # As C decreases, more coefficients go to zero

for c in C:
    clf = LogisticRegression(penalty='l1', C=c, solver='liblinear', class_weight="balanced")
    clf.fit(train_x, train_y)
    pred_y = clf.predict(test_x)
    print("Model performance with inverse regularization parameter, C = 1/λ:", c)
    cr = metrics.classification_report(test_y, pred_y)
    print(cr)
    print('')

Can someone show me how to do this over five different train/test splits using cross-validation (i.e., without copying the code above five times with distinct random states)?

1 Answer:

Answer 0 (score: 1)

Actually, classification_report is not defined as a scoring metric inside sklearn.model_selection.cross_val_score. I will therefore use f1_micro in the following code:

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Evaluate L1 regularization strengths for reducing features in final model
C = [10, 1, .1, 0.05, .01, .001]  # As C decreases, more coefficients go to zero

for c in C:
    clf = LogisticRegression(penalty='l1', C=c, solver='liblinear', class_weight="balanced")
    # using the data from before the split (X_scaled) and (y)
    scores = cross_val_score(clf, X_scaled, y, cv=5, scoring="f1_micro")  # <-- add this
    print(scores)  # <-- add this

The variable scores is now an array of five values, giving your classifier's f1_micro score on five different splits of the original data.
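To actually pick the best C from those per-fold scores, you can compare the mean (and standard deviation) across folds for each candidate. A minimal, self-contained sketch of that idea (the make_classification data here is just a stand-in for your own X_scaled and y):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy data standing in for X_scaled / y (assumed: any scaled feature matrix works)
X_scaled, y = make_classification(n_samples=300, n_features=20, random_state=0)

best_c, best_mean = None, -np.inf
for c in [10, 1, .1, 0.05, .01, .001]:
    clf = LogisticRegression(penalty='l1', C=c, solver='liblinear',
                             class_weight="balanced")
    scores = cross_val_score(clf, X_scaled, y, cv=5, scoring="f1_micro")
    print(f"C={c}: mean={scores.mean():.3f} +/- {scores.std():.3f}")
    if scores.mean() > best_mean:
        best_c, best_mean = c, scores.mean()

print("Best C:", best_c)
```

Comparing means across the same five folds keeps the comparison fair; for a more thorough search you could reach for GridSearchCV, but the loop above mirrors the structure of the original code.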

If you want to use another scoring metric with sklearn.model_selection.cross_val_score, you can list all available ones with:

from sklearn import metrics
print(metrics.SCORERS.keys())

You can also use several scoring metrics at once; the following uses both f1_micro and f1_macro:

from sklearn.model_selection import cross_validate

cross_validate(clf, X_scaled, y, cv=5, scoring=["f1_micro", "f1_macro"])
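cross_validate returns a dictionary; the per-fold scores for each requested metric live under test_<metric> keys. A small runnable sketch (again with make_classification as a placeholder for your X_scaled and y):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

# Toy data standing in for X_scaled / y
X_scaled, y = make_classification(n_samples=300, n_features=20, random_state=0)

clf = LogisticRegression(penalty='l1', C=0.1, solver='liblinear',
                         class_weight="balanced")
results = cross_validate(clf, X_scaled, y, cv=5,
                         scoring=["f1_micro", "f1_macro"])

print(results["test_f1_micro"])  # five f1_micro values, one per fold
print(results["test_f1_macro"])  # five f1_macro values, one per fold
```

The dictionary also contains fit_time and score_time arrays, which can be useful when comparing many values of C.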