How do I fix this problem? My code does not work with TensorFlow

Asked: 2019-07-31 04:50:33

Tags: python python-3.x tensorflow linear-regression

There is a problem in my code but I don't know how to fix it. My target function is y = x/3 - 8. Input: X_train is an array of floats from -10 to 10, and Y_train is built from the target function with a little noise added. I use gradient descent to optimize the loss function.

import tensorflow as tf
import numpy as np
import sklearn as skl
import scipy as sci
import pandas as pd
import seaborn as sb
import matplotlib as mplt
import matplotlib.pyplot as plt

def fun(x):
    return (1.0/3.0)*x - 8

def generate_data(N):
    X_train = np.random.uniform(-10, 10, size=N)
    #print(X_train)
    Y_train = fun(X_train) + X_train/50  # X_train/50 adds a small amount of "noise"
    #print(fun(X_train))
    #print(Y_train)
    return X_train, Y_train

X_train, Y_train = generate_data(100)
print(X_train[0:5])
print(Y_train[0:5])
#plt.scatter(x = X_train, y = Y_train)
#target: y = x/3 - 8
W = tf.Variable([np.random.random()], dtype=tf.float32)
b = tf.Variable([np.random.random()], dtype=tf.float32)
X = tf.compat.v1.placeholder(tf.float32)
Y = tf.compat.v1.placeholder(tf.float32)
#-----------------------------------------
linear_model = W*X + b
#-----------------------------------------
loss_value = tf.reduce_sum(tf.square(linear_model - Y))
#-------------------------------------------
gradient_op = tf.compat.v1.train.GradientDescentOptimizer(0.01)
train = gradient_op.minimize(loss_value)
init = tf.compat.v1.global_variables_initializer()
sess = tf.compat.v1.Session()
sess.run(init)  # initialize the variables
#--------------------------------------------
for i in range(1000):
    sess.run(train, {X:X_train, Y:Y_train})
#-------------------------------------------------
curr_W, curr_b, curr_loss = sess.run([W, b, loss_value], {X:X_train, Y:Y_train})
print("W: %s b: %s loss: %s"%(curr_W, curr_b, curr_loss))
#RESULT: W: [nan] b: [nan] loss: nan

1 Answer:

Answer (score: 1)

If you move the print() line inside the for loop, you get a much better picture of what is going on.
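For example, a minimal sketch of that change (printing every 50th iteration is just a sampling choice on my part; it happens to match the 20 lines of output below):

for i in range(1000):
    sess.run(train, {X: X_train, Y: Y_train})
    if i % 50 == 0:  # sample the progress every 50 steps
        curr_W, curr_b, curr_loss = sess.run([W, b, loss_value], {X: X_train, Y: Y_train})
        print("W: %s b: %s loss: %s" % (curr_W, curr_b, curr_loss))

The output then looks like this: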

W: [18.353216] b: [-16.890762] loss: 1247183.4
W: [-1346.8429] b: [24.758984] loss: 6829195000.0
W: [99991.945] b: [-1827.1696] loss: 37613004000000.0
W: [-7420763.] b: [134402.12] loss: 2.0716051e+17
W: [5.507228e+08] b: [-9974508.] loss: 1.14097444e+21
W: [-4.087121e+10] b: [7.402444e+08] loss: 6.284125e+24
W: [3.0332058e+12] b: [-5.4936326e+10] loss: 3.4610951e+28
W: [-2.2510564e+14] b: [4.0770313e+12] loss: 1.9062617e+32
W: [1.6705936e+16] b: [-3.0257186e+14] loss: 1.0499083e+36
W: [-1.2398103e+18] b: [2.245499e+16] loss: inf
W: [9.201099e+19] b: [-1.6664696e+18] loss: inf
W: [-6.828481e+21] b: [1.2367498e+20] loss: inf
W: [5.0676726e+23] b: [-9.178383e+21] loss: inf
W: [-3.7609104e+25] b: [6.8116245e+23] loss: inf
W: [2.791113e+27] b: [-5.0551633e+25] loss: inf
W: [-2.07139e+29] b: [3.7516253e+27] loss: inf
W: [1.5372564e+31] b: [-2.7842225e+29] loss: inf
W: [-1.14085575e+33] b: [2.066276e+31] loss: inf
W: [8.46672e+34] b: [-1.5334612e+33] loss: inf
W: [-inf] b: [1.1380394e+35] loss: inf

You can see that the loss is "exploding". This is a simple example of the exploding gradient problem.

You can read up on potential solutions, but for a toy example like this the simplest fix is probably just to lower the learning rate.
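For reference, one of those other potential solutions is gradient clipping. Here is only a rough sketch of how it could be wired into the question's TF1-style graph; the clip range of ±1.0 is an illustrative value, not a tuned recommendation, and it is not the fix used below:

gradient_op = tf.compat.v1.train.GradientDescentOptimizer(0.01)
# compute the raw gradients, clip each one into [-1, 1], then apply them
grads_and_vars = gradient_op.compute_gradients(loss_value)
clipped = [(tf.clip_by_value(g, -1.0, 1.0), v) for g, v in grads_and_vars]
train = gradient_op.apply_gradients(clipped)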

Intuitively, gradient descent is like trying to find your way to the bottom of a valley by pointing yourself downhill, taking a step, and then repeating. At each stage you re-evaluate the direction based on what is downhill from where you currently stand. If the valley is smooth, with no local low points, and your steps are small enough, you will eventually find the bottom.

The learning rate is analogous to the step size.

So, because the learning rate is too high, you can now imagine taking such a large step that you stride right across the whole valley and land at a higher point on the hill opposite. You then turn to point downhill again (roughly 180 degrees), face the centre of the valley, but stride right over it once more and land even higher up the other side. And so on, bouncing higher and higher from one side of the valley to the other.

So lowering the learning rate substantially, to something like this, seems to let it converge:

...
gradient_op = tf.compat.v1.train.GradientDescentOptimizer(0.0001)
...
W: [0.35333326] b: [-7.999988] loss: 1.4234502e-08
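
As a quick sanity check on those numbers: the "noise" added in generate_data() is the deterministic term X_train/50, so the training data actually lie exactly on y = (1/3 + 1/50)x - 8 ≈ 0.35333x - 8. That is why W converges to roughly 0.3533 rather than 1/3, b converges to -8, and the loss goes essentially to zero.

print(1.0/3.0 + 1.0/50.0)  # 0.35333..., matching the learned W above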