Why does TensorFlow linear regression predict all zeros?

Asked: 2017-09-24 02:21:08

Tags: python machine-learning tensorflow linear-regression gradient-descent

I want to implement linear regression with TensorFlow, but I can't figure out what's wrong. If I train for only one step, the predictions are all 0. If I train for more steps, the loss increases instead of decreasing. Can anyone help me? Thanks a lot!

# Step2
x = tf.placeholder(tf.float64, [None, 14])
y_ = tf.placeholder(tf.float64, [None])
# Step3
feature_size = int(x.shape[1])
label_size = int(1)
w = tf.Variable(tf.zeros([feature_size, label_size], dtype='float64'), name='weight')
b = tf.Variable(tf.zeros([label_size], dtype='float64'), name='bias')
# Step4
y = tf.matmul(x, w) + b
# Step5
loss = tf.reduce_sum(tf.square(y-y_))# + tf.matmul(tf.transpose(w), w)
# Step6
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
with tf.Session() as sess:
    # Step7
    tf.global_variables_initializer().run()
    train_loss, _, w_, b_, y_pred = sess.run([loss, optimizer, w , b , y],
                                            {x: X_train.as_matrix(), y_: y_train.as_matrix()})

If I print the results with the following code:

    print("The train_loss is :{0}".format(train_loss))
    print("y_pred shape:{1}\n y_pred value{0}".format(y_pred, y_pred.shape))
    print("w_:{0}".format(w_))
    print("b_:{0}".format(b_))

The output is:

The train_loss is :25366.999902840118
y_pred shape:(151, 1)
 y_pred value[[ 0.]
 [ 0.]
 [ 0.]
 [ 0.]
 [ 0.]
...
 [ 0.]]
w_:[[ -4197.62931207]
 [ -5012.08767412]
 [-12005.66678623]
 [ 16558.73513235]
 [ -7305.34601191]
 [ -5714.5346788 ]
 [ -9633.25591793]
 [-12477.03557256]
 [ -9630.39349598]
 [ -7365.70395179]
 [-11168.48902116]
 [ -6483.21729379]
 [  2177.84048453]
 [ -3059.72968574]]
b_:[ 24045.6024]

The data is the libsvm babyfat_scale dataset:

1.0708 1:-0.482105 2:-0.966102 3:-0.707746 4:0.585492 5:-0.492537 6:-0.514938 7:-0.598475 8:-0.69697 9:-0.411471 10:-0.465839 11:-0.621622 12:-0.287129 13:-0.0791367 14:-0.535714 
1.0853 1:-0.743158 2:-1 3:-0.552422 4:0.772021 5:-0.263682 6:-0.497364 7:-0.654384 8:-0.562998 9:-0.426434 10:-0.465839 11:-0.418919 12:-0.435644 13:0.136691 14:-0.142857 
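As an aside, data in this libsvm/svmlight format can be read with scikit-learn's `load_svmlight_file`. A minimal sketch, assuming scikit-learn is installed, using the two sample rows quoted above (fed from memory instead of a file so the snippet is self-contained):

```python
# Sketch: loading libsvm/svmlight-format data with scikit-learn.
# Assumes scikit-learn is installed; the two rows are the babyfat_scale
# excerpt from the question.
from io import BytesIO
from sklearn.datasets import load_svmlight_file

sample = (
    b"1.0708 1:-0.482105 2:-0.966102 3:-0.707746 4:0.585492 5:-0.492537 "
    b"6:-0.514938 7:-0.598475 8:-0.69697 9:-0.411471 10:-0.465839 "
    b"11:-0.621622 12:-0.287129 13:-0.0791367 14:-0.535714\n"
    b"1.0853 1:-0.743158 2:-1 3:-0.552422 4:0.772021 5:-0.263682 "
    b"6:-0.497364 7:-0.654384 8:-0.562998 9:-0.426434 10:-0.465839 "
    b"11:-0.418919 12:-0.435644 13:0.136691 14:-0.142857\n"
)

# load_svmlight_file accepts a path or a binary file-like object;
# X comes back as a scipy sparse matrix, y as a dense array.
X, y = load_svmlight_file(BytesIO(sample))
print(X.shape)  # (2, 14)
print(y)        # [1.0708 1.0853]
X_dense = X.toarray()  # the tf.placeholder above expects a dense matrix
```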

If I train 100 times on the same data:

for i in range(100):
    train_loss, _, w_, b_, y_pred = sess.run([loss, optimizer, w, b, y],
                                             {x: X_train.as_matrix(), y_: y_train.as_matrix()})

The loss increases instead of decreasing! Why? Please help me, thanks a lot!

1 answer:

Answer 0 (score: 0)

What data are you using to train the model? Try lowering the gradient-descent step size, e.g. optimizer = tf.train.GradientDescentOptimizer(0.01).minimize(loss), and run it 1000 or more times.
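Lowering the learning rate helps, but there is very likely also a shape bug in the question's graph: y = tf.matmul(x, w) + b has shape (N, 1), while y_ is declared as [None] and fed a vector of shape (N,). The subtraction y - y_ therefore broadcasts to an (N, N) matrix, and tf.reduce_sum sums N² squared terms instead of N, which inflates both the loss and the gradients. A minimal NumPy sketch of the broadcasting pitfall (the shapes mirror the question's 151-row training set):

```python
import numpy as np

y_pred = np.zeros((151, 1))   # shape (N, 1), like tf.matmul(x, w) + b
y_true = np.ones(151)         # shape (N,), like the y_ placeholder feed

diff = y_pred - y_true
print(diff.shape)             # (151, 151) -- broadcast, not elementwise!

# Fix: give both operands the same shape before subtracting.
diff_ok = y_pred - y_true.reshape(-1, 1)
print(diff_ok.shape)          # (151, 1)
```

In the TensorFlow graph, the equivalent fix is to declare y_ = tf.placeholder(tf.float64, [None, 1]) and feed a column vector, or to apply tf.reshape(y_, [-1, 1]) before the subtraction. Using tf.reduce_mean instead of tf.reduce_sum also makes the effective step size independent of the batch size, so a given learning rate is less likely to diverge.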