Learning_curve error

Date: 2017-03-15 23:01:22

Tags: python sklearn-pandas

I'm trying to plot a learning curve for the logistic regression below using plot_learning_curve, but I get an error. Can anyone help?

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import make_scorer, accuracy_score
from sklearn.model_selection import GridSearchCV

lg = LogisticRegression(random_state=42, penalty='l1')
parameters = {'C': [0.5]}


# Use classification accuracy to compare parameter combinations
acc_scorer_lg = make_scorer(accuracy_score)

# Run a grid search for the Logistic Regression classifier and all the selected parameters
grid_obj_lg = GridSearchCV(lg, parameters, scoring=acc_scorer_lg)
grid_obj_lg = grid_obj_lg.fit(x_train, y_train)

# Set our classifier, lg, to have the best combination of parameters
lg = grid_obj_lg.best_estimator_

# Fit the selected classifier to the training data. 
lg.fit(x_train, y_train)

Here is the learning_curve call:

predictions_lg = lg.predict(x_test)
print(accuracy_score(y_test, predictions_lg))

plot_learning_curve(lg, 'Logistic Regression', X, Y, cv=7);

Error message:

ValueError: This solver needs samples of at least 2 classes in the data, but the data contains only one class: '0'

As requested, here is the code for plot_learning_curve. It comes from http://scikit-learn.org/stable/auto_examples/model_selection/plot_learning_curve.html

import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import learning_curve


def plot_learning_curve(estimator, title, X, Y, ylim=None, cv=None, n_jobs=1,
                        train_sizes=np.linspace(.1, 1.0, 5), scoring='accuracy'):

    plt.figure(figsize=(10,6))
    plt.title(title)

    if ylim is not None:
        plt.ylim(*ylim)

    plt.xlabel("Training examples")
    plt.ylabel(scoring)

    train_sizes, train_scores, test_scores = learning_curve(
        estimator, X, Y, cv=cv, scoring=scoring,
        n_jobs=n_jobs, train_sizes=train_sizes)
    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    test_scores_mean = np.mean(test_scores, axis=1)
    test_scores_std = np.std(test_scores, axis=1)

    plt.grid()

    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
                     train_scores_mean + train_scores_std, alpha=0.1,
                     color="r")
    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
                     test_scores_mean + test_scores_std, alpha=0.1, color="g")

    plt.plot(train_sizes, train_scores_mean, 'o-', color="r", label="Training score")
    plt.plot(train_sizes, test_scores_mean, 'o-', color="g", label="Cross-validation score")
    plt.legend(loc="best")

    return plt

1 Answer:

Answer 0 (score: 0):

Try adding the shuffle parameter to your call to learning_curve. Without shuffling, learning_curve takes the smallest training subsets from the start of each training fold in index order; if your data is sorted by class, those subsets contain only one class, which produces exactly the ValueError you are seeing:

train_sizes, train_scores, test_scores = learning_curve(
    estimator, X, Y, cv=cv, scoring=scoring,
    n_jobs=n_jobs, train_sizes=train_sizes, shuffle=True)
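To illustrate, here is a minimal, self-contained sketch of the failure mode and the fix, using synthetic data whose labels are sorted by class (the variable names and data here are made up for the example, not taken from the question):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

# Synthetic data sorted by class: all the 0-labels first, then all the 1-labels.
rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, (50, 3)),   # class 0
               rng.normal(2, 1, (50, 3))])  # class 1
Y = np.array([0] * 50 + [1] * 50)

# shuffle=True randomizes the row order before the training subsets are
# taken, so even the smallest subset contains both classes.
train_sizes, train_scores, test_scores = learning_curve(
    LogisticRegression(), X, Y, cv=5,
    train_sizes=np.linspace(0.1, 1.0, 5),
    shuffle=True, random_state=42)

print(train_scores.shape)  # (5, 5): 5 train sizes x 5 CV folds
```

With shuffle=False on this sorted data, the 10% training subset would be drawn entirely from class 0 and LogisticRegression would raise the same "needs samples of at least 2 classes" error.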