Not sure why my loss is increasing with each epoch (linreg in TensorFlow)

Time: 2017-03-30 00:45:39

Tags: python tensorflow linear-regression

I know TF is overkill for this kind of problem, but this is just my way of introducing myself to the syntax and to the TF training process.

Here is the code:

data = pd.read_excel("/Users/madhavthaker/Downloads/Reduced_Car_Data.xlsx")

train = np.random.rand(len(data)) < 0.8

data_train = data[train]
data_test = data[~train]


x_train = data_train.ix[:,0:3].values
y_train = data_train.ix[:,-1].values
x_test = data_test.ix[:,0:3].values
y_test = data_test.ix[:,-1].values

# Build inference graph.
# Create Variables W and b that compute y_data = W * x_data + b
W = tf.Variable(tf.random_normal([3,1]), name='weights')
b = tf.Variable(tf.random_normal([1]), name='bias')

# Uncomment the following lines to see what W and b are.
# print(W)
# print(b)

# Create a placeholder we'll use later to feed x's into the graph for training and eval.
# shape=[None] means we can put in any number of examples. 
# This is used for minibatch training, and to evaluate a lot of examples at once.
x = tf.placeholder(tf.float32,shape=[x_train.shape[0],3], name='x')

# Uncomment this line to see what x is
# print(x)

# This is the same as tf.add(tf.matmul(x, W), b), but looks nicer
y = tf.matmul(x,W) + b

# Create a placeholder we'll use later to feed the correct y value into the graph
y_label = tf.placeholder(shape=[y_train.shape[0],], dtype=tf.float32, name='y_label')
# print (y_label)

# Build training graph.
loss = tf.reduce_mean(tf.square(y - y_label))  # Create an operation that calculates loss.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.00001)  # Create an optimizer.
train = optimizer.minimize(loss)  # Create an operation that minimizes loss.

# Uncomment the following 3 lines to see what 'loss', 'optimizer' and 'train' are.
# print("loss:", loss)
# print("optimizer:", optimizer)
# print("train:", train)
init = tf.global_variables_initializer()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)

    # Fit all training data
    for epoch in range(1000):

        # Display logs per epoch step
        if (epoch+1) % 50 == 0:
            cost_val, hy_val, _ = sess.run([loss, y, train], feed_dict={x: x_train, y_label: y_train})
            print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(cost_val))

    print("Optimization Finished!")
    training_cost = sess.run(loss, feed_dict={x: x_train, y_label: y_train})
    print("Training cost=", training_cost, "W=", sess.run(W), "b=", sess.run(b), '\n')

The result:

Epoch: 0050 cost= 12377621.000000000
Epoch: 0100 cost= 455768801280.000000000
Epoch: 0150 cost= 16799577747226624.000000000
Epoch: 0200 cost= 619229115796003225600.000000000
Epoch: 0250 cost= 22824834360245537040498688.000000000
Epoch: 0300 cost= 841322078804629437979012628480.000000000
Epoch: 0350 cost= 31011140748122347114388001285734400.000000000
Epoch: 0400 cost= inf
Epoch: 0450 cost= inf
Epoch: 0500 cost= inf
Epoch: 0550 cost= inf
Epoch: 0600 cost= inf
Epoch: 0650 cost= inf
Epoch: 0700 cost= inf
Epoch: 0750 cost= inf
Epoch: 0800 cost= inf
Epoch: 0850 cost= nan
Epoch: 0900 cost= nan
Epoch: 0950 cost= nan
Epoch: 1000 cost= nan
Optimization Finished!
Training cost= nan W= [[ nan]
 [ nan]
 [ nan]] b= [ nan] 

I have been staring at this for a while and I can't seem to figure out what is going on. Any help would be greatly appreciated.

2 answers:

Answer 0 (score: 0)

I think this is due to the shape of your cost function. It is actually possible for the cost to increase; see this answer for a mathematical explanation: https://datascience.stackexchange.com/questions/15962/why-is-learning-rate-causing-my-neural-networks-weights-to-skyrocket

Maybe try lowering the learning rate to see whether that helps.
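The learning-rate effect can be illustrated without TensorFlow. The sketch below is hypothetical (random data, not the asker's Reduced_Car_Data file): plain batch gradient descent on a least-squares problem with large-magnitude features, comparing a learning rate above the stability threshold (the loss grows without bound, much like the inf/nan trace above) with one below it (the loss shrinks).

```python
import numpy as np

# Hypothetical illustration: batch gradient descent on y = X @ w with
# large-magnitude features, so the stability threshold on the step size
# is small -- similar in spirit to raw, unnormalized car data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) * 100.0
y_true = X @ np.array([1.0, 2.0, 3.0])

def final_loss(lr, steps=50):
    """Mean squared error after `steps` gradient-descent updates."""
    w = np.zeros(3)
    for _ in range(steps):
        err = X @ w - y_true                  # residuals
        grad = 2.0 / len(X) * (X.T @ err)     # gradient of the MSE
        w -= lr * grad
    return np.mean((X @ w - y_true) ** 2)

print("lr=1e-3 ->", final_loss(1e-3))  # diverges: loss grows by orders of magnitude
print("lr=1e-6 ->", final_loss(1e-6))  # converges: loss drops below its starting value
```

Which learning rates are "too large" depends on the scale of the inputs, which is why rescaling the features is often suggested alongside lowering the rate.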

PS: Is it normal that sess.run(...) is not called at every epoch? As written, the training op only runs on the epochs where the loss is printed (every 50th), so far fewer gradient steps are taken than the loop suggests.
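The loop structure the PS is pointing at can be sketched without TensorFlow. Below, a hypothetical one-variable quadratic loss stands in for the model and a plain function stands in for sess.run: the update runs on every epoch, while logging stays on the every-50 schedule.

```python
import numpy as np

# Hypothetical stand-in for the training step: one gradient-descent
# update on the 1-D loss (w - 3)^2, so "training" is cheap to fake.
lr = 0.1

def train_step(w):
    grad = 2.0 * (w - 3.0)            # d/dw of (w - 3)^2
    return w - lr * grad

w = 0.0
for epoch in range(1000):
    w = train_step(w)                 # run the update on EVERY epoch...
    if (epoch + 1) % 50 == 0:         # ...but only log every 50th
        print("Epoch:", "%04d" % (epoch + 1), "cost=", (w - 3.0) ** 2)
```

In the original code, the analogous change is moving sess.run(train, ...) out of the if block and keeping only the loss fetch inside it for printing.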

Answer 1 (score: 0)

I think the model is too small to approximate the desired mapping. I ran the code as-is on random data and the loss did not improve; it only improved once I added one more layer to the model.
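As a sketch of what "one more layer" could look like, here is a hypothetical NumPy version of the two shapes involved (the hidden size of 16 and the ReLU choice are illustrative assumptions, not taken from the answer): the original model is a single affine map x @ W + b, and the variant inserts a nonlinear hidden layer between two affine maps.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))              # batch of 8 examples, 3 features

# Original model: a single affine map, 3 -> 1
W = rng.normal(size=(3, 1))
b = np.zeros(1)
y_linear = x @ W + b                     # shape (8, 1)

# "One more layer": 3 -> 16 -> 1 with a ReLU in between
W1 = rng.normal(size=(3, 16))
b1 = np.zeros(16)
W2 = rng.normal(size=(16, 1))
b2 = np.zeros(1)
hidden = np.maximum(0.0, x @ W1 + b1)    # ReLU hidden layer, shape (8, 16)
y_mlp = hidden @ W2 + b2                 # shape (8, 1)

print(y_linear.shape, y_mlp.shape)
```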