Context: I am trying to write a generic function that optimizes the cost of any regression problem using polynomial regression (of any specified degree). I am trying to fit the model to the load_boston dataset (house prices as the label, 13 features).
I have tried several degrees, learning rates and epoch counts (with gradient descent), and the MSE is very high even on the training dataset (I am using 100% of the data to train the model and checking the cost on that same data, yet the MSE cost is still huge).
import tensorflow as tf
from sklearn.datasets import load_boston

def polynomial(x, coeffs):
    y = 0
    for i in range(len(coeffs)):
        y += coeffs[i]*x**i
    return y

def initial_parameters(dimensions, data_type, degree):  # list number of dims/features and degree
    thetas = [tf.Variable(0, dtype=data_type)]  # the constant theta/bias
    for i in range(degree):
        thetas.append(tf.Variable(tf.zeros([dimensions, 1], dtype=data_type)))
    return thetas

def regression_error(x, y, thetas):
    hx = thetas[0]  # constant thetas - no need to have 1 for each variable (e.g x^0*th + y^0*th...)
    for i in range(1, len(thetas)):
        hx = tf.add(hx, tf.matmul(tf.pow(x, i), thetas[i]))
    return tf.reduce_mean(tf.squared_difference(hx, y))

def polynomial_regression(x, y, data_type, degree, learning_rate, epoch):  # features=dimensions=variables
    thetas = initial_parameters(x.shape[1], data_type, degree)
    cost = regression_error(x, y, thetas)
    init = tf.initialize_all_variables()
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
    with tf.Session() as sess:
        sess.run(init)
        for epoch in range(epoch):
            sess.run(optimizer)
        return cost.eval()

x, y = load_boston(True)  # yes just use the entire dataset
for deg in range(1, 2):
    for lr in range(-8, -5):
        error = polynomial_regression(x, y, tf.float64, deg, 10**lr, 100)
        print(deg, lr, error)
Even though most of the labels are around 30, it outputs a cost of 97.3 (with degree = 1 and learning rate = 10^-6). What is wrong with the code?
Answer (score: 0)
The problem is that the different features are on very different orders of magnitude and therefore not compatible with a single learning rate shared by all of them. More importantly, when initializing the variables with non-zero values, you have to make sure those initial values are compatible with the feature values as well.
In [1]: from sklearn.datasets import load_boston
In [2]: x, y = load_boston(True)
In [3]: x.std(axis=0)
Out[3]:
array([8.58828355e+00, 2.32993957e+01, 6.85357058e+00, 2.53742935e-01,
       1.15763115e-01, 7.01922514e-01, 2.81210326e+01, 2.10362836e+00,
       8.69865112e+00, 1.68370495e+02, 2.16280519e+00, 9.12046075e+01,
       7.13400164e+00])
In [4]: x.mean(axis=0)
Out[4]:
array([3.59376071e+00, 1.13636364e+01, 1.11367787e+01, 6.91699605e-02,
       5.54695059e-01, 6.28463439e+00, 6.85749012e+01, 3.79504269e+00,
       9.54940711e+00, 4.08237154e+02, 1.84555336e+01, 3.56674032e+02,
       1.26530632e+01])
A common approach is to normalize the input data (e.g. to zero mean and unit variance) and to choose the initial weights at random (e.g. from a normal distribution with std = 1). sklearn.preprocessing provides various routines for this: PolynomialFeatures can be used to generate the polynomial features automatically, StandardScaler scales the data to zero mean and unit variance, and pipeline.Pipeline conveniently combines these preprocessing steps. The polynomial_regression function then reduces to:
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

pipeline = Pipeline([
    ('poly', PolynomialFeatures(degree)),
    ('scaler', StandardScaler())
])

x = pipeline.fit_transform(x)
y = y.reshape(-1, 1)  # column vector, so the difference with matmul(x, thetas) doesn't broadcast

thetas = tf.Variable(tf.random_normal([x.shape[1], 1], dtype=data_type))
cost = tf.reduce_mean(tf.squared_difference(tf.matmul(x, thetas), y))

# Perform variable initialization and optimizer instantiation here.
# Run optimization over epochs.
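For completeness, here is a minimal sketch of what those last two comment lines could look like, assuming TF 1.x and reusing the learning_rate and epoch values that the original polynomial_regression function received; the structure mirrors the question's own training loop and is illustrative rather than prescriptive:

# Minimal training-loop sketch (TF 1.x): x, y, thetas and cost are assumed to be
# defined as in the snippet above.
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
init = tf.global_variables_initializer()  # created after the optimizer so any variables it adds are initialized too

with tf.Session() as sess:
    sess.run(init)
    for _ in range(epoch):
        sess.run(optimizer)
    final_cost = cost.eval()
print(final_cost)

With standardized features, a much larger learning rate than 10^-6 (on the order of 0.01-0.1, say) is usually workable, which is part of the point of the normalization.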