How to run GridSearchCV with Ridge regression in sklearn

Asked: 2019-08-06 13:10:22

Tags: scikit-learn

I imported GridSearchCV from sklearn to do this. I don't know what values I should give in the array inside the parameters:

Parameters={'alpha':[array]}
Ridge_reg=GridsearchCV (ridge,parameters,scoring='neg mean squared error',cv=5)
  1. Is this correct?
  2. How can I view the Ridge regression plot?

1 Answer:

Answer 0 (score: 0)

The code you posted has several syntax errors, e.g. GridsearchCV (the class is GridSearchCV) and scoring='neg mean squared error' (the scorer name is 'neg_mean_squared_error').

The first input argument should be an object (the model/estimator).

Use this:

from sklearn.linear_model import Ridge
import numpy as np
from sklearn.model_selection import GridSearchCV

n_samples, n_features = 10, 5
rng = np.random.RandomState(0)
y = rng.randn(n_samples)
X = rng.randn(n_samples, n_features)

parameters = {'alpha':[1, 10]}

# define the model/estimator
model = Ridge()

# define the grid search
Ridge_reg = GridSearchCV(model, parameters, scoring='neg_mean_squared_error', cv=5)

# fit the grid search
Ridge_reg.fit(X, y)

# best estimator
print(Ridge_reg.best_estimator_)

# best model
best_model = Ridge_reg.best_estimator_
best_model.fit(X, y)
...
...
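Once the grid search has been fitted, its standard attributes show which alpha was selected and how it scored during cross-validation. A minimal sketch, assuming the Ridge_reg object fitted above:

# best hyperparameters found by the search (here, the chosen alpha)
print(Ridge_reg.best_params_)

# mean cross-validated score of the best estimator
# (negative MSE, so values closer to 0 are better)
print(Ridge_reg.best_score_)

# per-candidate results; the dict can also be loaded into a pandas DataFrame
print(Ridge_reg.cv_results_['mean_test_score'])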

For visualization (Ridge coefficients as a function of the regularization):

import matplotlib.pyplot as plt

# fit one Ridge model per alpha and record its coefficients
alphas = [1, 10]
coefs = []
for a in alphas:
    ridge = Ridge(alpha=a, fit_intercept=False)
    ridge.fit(X, y)
    coefs.append(ridge.coef_)

ax = plt.gca()
ax.plot(alphas, coefs)
ax.set_xscale('log')
ax.set_xlim(ax.get_xlim()[::-1])  # reverse axis
plt.xlabel('alpha')
plt.ylabel('weights')
plt.title('Ridge coefficients as a function of the regularization')
plt.axis('tight')
plt.show()

[Plot: Ridge coefficients as a function of the regularization]
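With only two alpha values the plot is just a segment between two points; a denser, log-spaced grid gives a smoother regularization path. A minimal variation, with an alpha range chosen purely for illustration (Ridge, X and y as defined above; the plotting code stays the same):

import numpy as np

# log-spaced grid of regularization strengths (illustrative range)
alphas = np.logspace(-3, 3, 100)

# refit one Ridge model per alpha and collect the coefficient vectors
coefs = [Ridge(alpha=a, fit_intercept=False).fit(X, y).coef_ for a in alphas]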