I don't understand why it is necessary to standardize the dataset before using sklearn.linear_model.LinearRegression. Even though normalizing raw data is common practice, it is not obvious to me why it should be required to get the correct result.
As a test, I prepared a dataset:
import numpy as np
import pandas as pd

size = 700
data = pd.DataFrame()   # not shown in the original snippet, but needed for the assignments below
data['x_1'] = x_1
data['x_2'] = x_2
data['y'] = [x_1[i]*7.5 - 2*x_2[i] + noise[i] for i in range(size)]   # list comprehension instead of map() so it also works under Python 3
where:
noise = np.random.normal(0,1,size)
x_1 = np.random.normal(5,2,size)
x_2 = np.random.normal(2,1,size)
Then I tried to find the coefficients with LinearRegression, using the shuffled and scaled feature matrix:
from sklearn.preprocessing import scale
from sklearn.utils import shuffle
from sklearn.linear_model import LinearRegression   # import implied by the text above

df_shuffled = shuffle(data, random_state=123)
X = scale(df_shuffled[df_shuffled.columns[:-1]])
y = df_shuffled["y"]

linear_regressor = LinearRegression()               # the regressor referred to below
linear_regressor.fit(X, y)
and got the following results:
(14.951827073780766, 'x_1')
(-1.9171042297858722, 'x_2')
After that I repeated all of the steps without the scale() function and got much better results, close to the true coefficients 7.5 and -2.
Is this just an edge case, or did I make a mistake somewhere?
Answer 0 (score: 3)
Standardization is not a requirement for linear regression. Here is an example where I split the data into train/test sets and then predict on the test set:
>>> df = pd.DataFrame({'x_1': np.random.normal(0, 1, size), 'x_2': np.random.normal(2, 1, size)})
>>> df['y'] = [df['x_1'][i] * 7.5 - 2 * df['x_2'][i] + np.random.normal(0, 1, size)[i] for i in range(size)]  # list comprehension so the assignment also works under Python 3
>>> lr = LinearRegression()
>>> X_scaled = scale(df[['x_1', 'x_2']])
>>> X_ns = df[['x_1', 'x_2']]
>>> y = df['y']
>>> train_X_scaled = X_scaled[:-100]
>>> test_X_scaled = X_scaled[-100:]
>>> train_X_ns = X_ns[:-100]
>>> test_X_ns = X_ns[-100:]
>>> train_y = y[:-100]
>>> test_y = y[-100:]
>>> lr.fit(train_X_scaled, train_y)
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
>>> lr.coef_
array([ 7.38189303, -2.04137514])
>>> lr.predict(test_X_scaled)
array([ -5.12130597, -21.58547658, -10.59483732, -10.56241312,
-16.88790301, 0.61347437, -7.28207791, -9.37464865,
-5.12411501, -14.79287322, -9.84583896, 0.61183408,
-9.00695481, -0.42201284, -20.50254306, 0.1984764 ,
-9.57419381, 1.39035118, 9.66405865, -10.18972252,
-8.76733834, -7.33179222, -10.53075411, 0.51671133,
3.65140463, -16.86740729, 7.86837224, 4.61310894,
-3.80289123, -11.92948864, -6.55643999, -10.77231532,
1.97181141, 15.75089958, 2.71987359, -5.49740398,
-6.59654793, -6.39042298, -8.86057313, 12.63031921,
-8.05054779, -11.04476828, -3.70610232, -4.81986166,
-3.09909457, 10.3576317 , -6.48789854, -4.05243726,
-4.11076559, -9.21957658, -4.36368549, 2.13365208,
-19.24153319, 6.52751487, -3.48801127, 2.01989782,
-1.00673834, -10.33590131, -9.25592347, -16.91433355,
3.58685085, -6.30149903, -2.23264539, 6.86114404,
8.33602945, -14.25656579, -22.24380384, -14.50287259,
-6.64710009, -17.40421316, -12.7734427 , -3.76204612,
-0.05843445, -5.0349674 , -6.86404519, -6.8523112 ,
-14.9479788 , 1.6120415 , -6.24457762, -7.11712009,
-5.57018237, -2.89811595, -5.44008672, 8.19302959,
-1.78437334, -19.32108323, 1.00091276, 4.79161569,
1.65685676, -8.68406543, 7.27219645, -2.90941943,
2.4613977 , 2.94533763, -6.35486958, -1.01281799,
2.13959957, -6.73934486, -1.65493937, 13.2605013 ])
>>> lr.fit(train_X_ns, train_y)
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
>>> lr.coef_
array([ 7.52554825, -1.98783572])
>>> lr.predict(test_X_ns)
array([ -5.12130597, -21.58547658, -10.59483732, -10.56241312,
-16.88790301, 0.61347437, -7.28207791, -9.37464865,
-5.12411501, -14.79287322, -9.84583896, 0.61183408,
-9.00695481, -0.42201284, -20.50254306, 0.1984764 ,
-9.57419381, 1.39035118, 9.66405865, -10.18972252,
-8.76733834, -7.33179222, -10.53075411, 0.51671133,
3.65140463, -16.86740729, 7.86837224, 4.61310894,
-3.80289123, -11.92948864, -6.55643999, -10.77231532,
1.97181141, 15.75089958, 2.71987359, -5.49740398,
-6.59654793, -6.39042298, -8.86057313, 12.63031921,
-8.05054779, -11.04476828, -3.70610232, -4.81986166,
-3.09909457, 10.3576317 , -6.48789854, -4.05243726,
-4.11076559, -9.21957658, -4.36368549, 2.13365208,
-19.24153319, 6.52751487, -3.48801127, 2.01989782,
-1.00673834, -10.33590131, -9.25592347, -16.91433355,
3.58685085, -6.30149903, -2.23264539, 6.86114404,
8.33602945, -14.25656579, -22.24380384, -14.50287259,
-6.64710009, -17.40421316, -12.7734427 , -3.76204612,
-0.05843445, -5.0349674 , -6.86404519, -6.8523112 ,
-14.9479788 , 1.6120415 , -6.24457762, -7.11712009,
-5.57018237, -2.89811595, -5.44008672, 8.19302959,
-1.78437334, -19.32108323, 1.00091276, 4.79161569,
1.65685676, -8.68406543, 7.27219645, -2.90941943,
2.4613977 , 2.94533763, -6.35486958, -1.01281799,
2.13959957, -6.73934486, -1.65493937, 13.2605013 ])
The scores are identical as well:
>>> lr.fit(train_X_ns, train_y)
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
>>> lr.score(test_X_ns, test_y)
0.9829300206380267
>>> lr.fit(train_X_scaled, train_y)
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
>>> lr.score(test_X_scaled, test_y)
0.9829300206380267
So why standardize at all? Because it does not hurt. In a pipeline you may add further steps, such as clustering or PCA, that do require scaling. Keep in mind that if you apply scaling, you also want to apply it to the data you score on. In that case you want StandardScaler, because it has separate fit and transform methods. In my example I used scale, because I applied it to both the train and test portions before splitting. In a real-life scenario, however, your future data is unknown, so you want to use StandardScaler to transform it based on the mu and std of the training set.
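As a rough sketch of that workflow, reusing train_X_ns, test_X_ns, train_y and test_y from the example above (the exact numbers will vary because the data is random):
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
train_X_scaled2 = scaler.fit_transform(train_X_ns)   # mu and std are learned from the training rows only
test_X_scaled2 = scaler.transform(test_X_ns)         # the same training mu and std are applied to the held-out rows

lr.fit(train_X_scaled2, train_y)
print(lr.score(test_X_scaled2, test_y))              # essentially the same R^2 as the unscaled fit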
Answer 1 (score: 2)
sklearn.preprocessing.scale() transforms a variable by subtracting its mean (mu) and dividing by its standard deviation (sigma):
x_scaled = (x - mu) / sigma
In your case, mu and sigma for x_1 are 5 and 2 respectively, so calling scale subtracts 5 from every value of x_1 and divides the result by 2.
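You can verify that equivalence directly. A small sketch, with x_1 drawn the same way as in the question:
import numpy as np
from sklearn.preprocessing import scale

x_1 = np.random.normal(5, 2, 700)
by_hand = (x_1 - x_1.mean()) / x_1.std()   # (x - mu) / sigma computed manually
print(np.allclose(scale(x_1), by_hand))    # True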
Shifting does not affect the linear regression coefficients; it only changes the intercept. Scaling, however, does. If the relationship between x_1 and y is given by
y = a*x_1  # where the coefficient a is a constant
and we divide x_1 by 2, then the coefficient has to double to keep the same relationship.
In this example the coefficient of x_2 is essentially unaffected, because its sigma is 1.
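A quick way to see both effects with a single synthetic feature (the names below are just for illustration):
import numpy as np
from sklearn.linear_model import LinearRegression

x = np.random.normal(5, 2, 700).reshape(-1, 1)
y_demo = 7.5 * x.ravel() + np.random.normal(0, 1, 700)

print(LinearRegression().fit(x, y_demo).coef_)       # roughly [7.5]
print(LinearRegression().fit(x - 5, y_demo).coef_)   # still roughly [7.5]: shifting only moves the intercept
print(LinearRegression().fit(x / 2, y_demo).coef_)   # roughly [15]: dividing x by 2 doubles the coefficient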
Coefficients and intercept without scaling:
linear_regressor = LinearRegression()
linear_regressor.fit(X, y)   # here X is the unscaled feature matrix df_shuffled[df_shuffled.columns[:-1]]
print(linear_regressor.coef_)
print(linear_regressor.intercept_)
#[ 7.48676034 -1.99400201]
#0.0253066229528
With scaling:
X_scaled = scale(df_shuffled[df_shuffled.columns[:-1]])
linear_regressor2 = LinearRegression()
linear_regressor2.fit(X_scaled,y)
print(linear_regressor2.coef_)
print(linear_regressor2.intercept_)
#[ 14.90368565 -1.94029573]
#33.7451724511
In the second case you get the coefficients and intercept for the scaled versions of x_1 and x_2.
That is not a problem or a mistake. It simply means that if you use the fitted model to make predictions, you have to apply the same transformation to any new data first.
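For example, a minimal sketch of scoring a hypothetical new observation with linear_regressor2: the new row has to be shifted and scaled with the same mu and sigma that scale() derived from the training data.
import numpy as np

X_raw = df_shuffled[df_shuffled.columns[:-1]]
mu = X_raw.mean(axis=0).values
sigma = X_raw.std(axis=0, ddof=0).values        # ddof=0 matches what sklearn's scale() uses

new_row = np.array([[6.0, 1.5]])                # hypothetical new (x_1, x_2) values
new_row_scaled = (new_row - mu) / sigma         # same transformation as the training data
print(linear_regressor2.predict(new_row_scaled))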