SGDRegressor gives nonsensical results

Time: 2015-07-16 01:14:23

Tags: python statistics scikit-learn

I tried to create a simple regression test case, a linear function of x, but SGDRegressor returns a wrong result:

import numpy as np
from sklearn.linear_model import SGDRegressor
from random import random

# y is a noisy linear function of x: y = x + uniform(0, 1)
X = np.array(range(1000))
y = np.array([x + random() for x in X])
X = X.reshape(1000, 1)

sgd = SGDRegressor()
sgd.fit(X, y)
print([sgd.intercept_, sgd.coef_])

[array([-4.13761484e+08]), array([-9.66320825e+10])]

2 Answers:

Answer 0 (score: 3)

Try setting an initial learning rate lower than the default of 0.01, for example:

import numpy as np
from sklearn.linear_model import SGDRegressor
from random import random

X = np.array(range(1000))
y = np.array([x + random() for x in X])
X = X.reshape(1000, 1)

# eta0 is the initial learning rate (default 0.01)
sgd = SGDRegressor(eta0=0.000001)
sgd.fit(X, y)
print([sgd.intercept_, sgd.coef_])

Output:

[array([ 0.00648436]), array([ 1.00053978])]

Edit: I'm not sure of the exact reason, but the large values contained in X and y seem to cause numerical stability problems. Setting verbose=1 on the SGDRegressor shows the following output with the default learning rate:

-- Epoch 1
Norm: nan, NNZs: 1, Bias: nan, T: 1000, Avg. loss: nan
Total training time: 0.00 seconds.

This means the internal computations overflow in some way. With eta0=0.000001:

-- Epoch 1
Norm: 1.00, NNZs: 1, Bias: 0.006449, T: 1000, Avg. loss: 873.136013
Total training time: 0.00 seconds.
-- Epoch 2
Norm: 1.00, NNZs: 1, Bias: 0.006461, T: 2000, Avg. loss: 436.597862
Total training time: 0.00 seconds.
-- Epoch 3
Norm: 1.00, NNZs: 1, Bias: 0.006471, T: 3000, Avg. loss: 291.085373
Total training time: 0.00 seconds.
-- Epoch 4
Norm: 1.00, NNZs: 1, Bias: 0.006481, T: 4000, Avg. loss: 218.329235
Total training time: 0.00 seconds.
-- Epoch 5
Norm: 1.00, NNZs: 1, Bias: 0.006491, T: 5000, Avg. loss: 174.675614
Total training time: 0.00 seconds.
[array([ 0.00649087]), array([ 1.00035165])]
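
For reference, these verbose traces can be reproduced with something like the following sketch (verbose is a standard SGDRegressor constructor argument; X and y are as defined above):

# Default learning rate: the trace shows nan almost immediately.
SGDRegressor(verbose=1).fit(X, y)

# Much smaller initial learning rate: the loss decreases each epoch.
SGDRegressor(eta0=0.000001, verbose=1).fit(X, y)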

Another possible approach is to scale the data (both the inputs and the outputs) to a normal range beforehand, for example with StandardScaler. With that preprocessing, the default parameters worked well.
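
A minimal sketch of that approach, assuming we standardize y by hand since StandardScaler expects 2-D arrays (the variable names here are illustrative, not from the original answer):

import numpy as np
from random import random
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler

X = np.array(range(1000), dtype=float).reshape(-1, 1)
y = np.array([x + random() for x in range(1000)])

# Standardize both the inputs and the targets to zero mean, unit variance.
x_scaler = StandardScaler().fit(X)
y_scaler = StandardScaler().fit(y.reshape(-1, 1))

sgd = SGDRegressor()  # default parameters are fine on scaled data
sgd.fit(x_scaler.transform(X), y_scaler.transform(y.reshape(-1, 1)).ravel())

# Predictions come back in the scaled space; invert to the original scale.
pred = sgd.predict(x_scaler.transform([[500.0]]))
print(y_scaler.inverse_transform(pred.reshape(-1, 1)))  # roughly [[500.5]]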

Answer 1 (score: 3)

I think it has to do with the fact that adding random() to integers between 0 and 1000 has little relative effect on them as they get bigger. Feature scaling with StandardScaler as a preprocessing step might help.

According to sklearn's Tips on Practical Use:

  Stochastic Gradient Descent is sensitive to feature scaling, so it is highly recommended to scale your data.

After tweaking your example without using feature scaling, the parameters worth watching are loss, n_iter, eta0, and power_t, with eta0 being the main one: the SGDRegressor default is too high for this problem.

import numpy as np
from sklearn.linear_model import SGDRegressor
from random import random
import matplotlib.pyplot as plt
import itertools

X = np.array(range(1000))
y = np.array([x + random() for x in X])
X = X.reshape(-1, 1)

fig, ax = plt.subplots(2, 2, figsize=(8, 6))
coords = itertools.product([0, 1], repeat=2)

# Note: in newer scikit-learn versions, n_iter was renamed max_iter
# and 'squared_loss' was renamed 'squared_error'.
for coord, loss in zip(coords, ['huber', 'epsilon_insensitive',
                                'squared_epsilon_insensitive', 'squared_loss']):
    row, col = coord
    ax[row][col].plot(X, y, 'k:', label='actual', linewidth=2)
    for iteration in [5, 500, 1000, 5000]:  # or try range(1, 11)
        sgd = SGDRegressor(loss=loss, n_iter=iteration, eta0=0.00001, power_t=0.15)
        sgd.fit(X, y)
        y_pred = sgd.intercept_[0] + (sgd.coef_[0] * X)
        print('Loss:', loss, 'n_iter:', iteration, 'intercept, coef:',
              [sgd.intercept_[0], sgd.coef_[0]], 'SSE:', ((y - sgd.predict(X))**2).sum())

        ax[row][col].plot(X, y_pred, label='n_iter: ' + str(iteration))
        ax[row][col].legend()
        ax[row][col].set_title(loss)
        plt.setp(ax[row][col].legend_.get_texts(), fontsize='xx-small')

plt.tight_layout()
plt.show()

Here's what it prints:

Loss: huber n_iter: 5 intercept, coef: [0.001638952911639975, 0.81740614500327669] SSE: 11185831.2597
Loss: huber n_iter: 500 intercept, coef: [0.021493133105072931, 1.0006662185561777] SSE: 137.574163486
Loss: huber n_iter: 1000 intercept, coef: [0.037047745354150396, 1.0006161110073943] SSE: 134.784858635
Loss: huber n_iter: 5000 intercept, coef: [0.12718334969902309, 1.0006005570641865] SSE: 116.13213201
Loss: epsilon_insensitive n_iter: 5 intercept, coef: [0.0046948965851395814, 1.0005010438267816] SSE: 157.935817311
Loss: epsilon_insensitive n_iter: 500 intercept, coef: [0.15261696111333306, 0.99963762449395877] SSE: 359.657749786
Loss: epsilon_insensitive n_iter: 1000 intercept, coef: [0.24224930972696881, 1.0006671880072746] SSE: 126.805962732
Loss: epsilon_insensitive n_iter: 5000 intercept, coef: [0.45888370500803022, 1.0003153040071979] SSE: 106.091573864
Loss: squared_epsilon_insensitive n_iter: 5 intercept, coef: [1774329.1447094907, -113423.55986319004] SSE: 4.08404355317e+18
Loss: squared_epsilon_insensitive n_iter: 500 intercept, coef: [42274920.182269663, -104909.90969312852] SSE: 1.01976866207e+18
Loss: squared_epsilon_insensitive n_iter: 1000 intercept, coef: [22843691.320190568, -37289.079052061767] SSE: 1.33664638821e+17
Loss: squared_epsilon_insensitive n_iter: 5000 intercept, coef: [3165399.5624849019, -3391.4406385053994] SSE: 3.12252668162e+15
Loss: squared_loss n_iter: 5 intercept, coef: [0.29805062264896459, 1.0006351157532956] SSE: 131.697873311
Loss: squared_loss n_iter: 500 intercept, coef: [0.66256539671809789, 1.0001831768155882] SSE: 154.277820955
Loss: squared_loss n_iter: 1000 intercept, coef: [0.13753387481588603, 1.0006362052460742] SSE: 117.151466521
Loss: squared_loss n_iter: 5000 intercept, coef: [0.38191334428572482, 1.0000364177730059] SSE: 89.3183008079

And here's what it plots (note: this changes on every re-run, so your output may differ from mine):

[Image: SGD Loss Responses]

Of note: the y-axis for squared_epsilon_insensitive blows up, while the other three loss functions stay within the expected range.

For fun, change power_t from 0.15 to 0.5. The reason this has an effect is that the default learning_rate='invscaling' schedule computes eta = eta0 / pow(t, power_t).
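
Here's a quick sketch that just evaluates that formula, to show how much faster the step size decays with power_t=0.5 (the t values are arbitrary illustration points):

eta0 = 0.00001  # same initial rate as in the example above

# Learning rate under 'invscaling' at steps t = 1, 10, 100, 1000
for power_t in (0.15, 0.5):
    etas = ['%.2e' % (eta0 / pow(t, power_t)) for t in (1, 10, 100, 1000)]
    print('power_t =', power_t, '->', etas)

Over the first 1000 updates the step size shrinks by a factor of about 32 with power_t=0.5, versus under 3 with power_t=0.15, so the higher exponent effectively freezes the fit much earlier.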

SGD is a sensitive flower