Cost not updating in TensorFlow

Date: 2017-01-27 02:15:46

Tags: python tensorflow

I am using the following code from Sirajology: https://github.com/llSourcell/How_to_use_Tensorflow_for_classification-LIVE/blob/master/demo.ipynb It has been modified to accept my own .csv, whose dimensions differ from those used in his example.

import pandas as pd             
import numpy as np               
import matplotlib.pyplot as plt  
import tensorflow as tf          # Fire from the gods
dataframe = pd.read_csv("jfkspxs.csv") 
dataframe = dataframe.drop(["Field6", "Field9", "rowid"], axis=1)

inputX = dataframe.loc[:, ['Field2', 'Field3', 'Field4', 'Field5', 'Field7', 'Field8', 'Field10']].as_matrix()
inputY = dataframe.loc[:, ["y1"]].as_matrix()

learning_rate = 0.001
training_epochs = 2000
display_step = 50
n_samples = inputY.size

x = tf.placeholder(tf.float32, [None, 7])              
W = tf.Variable(tf.zeros([7, 1]))          
b = tf.Variable(tf.zeros([1]))             

y_values = tf.add(tf.matmul(x, W), b)   
y = tf.nn.softmax(y_values)                
y_ = tf.placeholder(tf.float32, [None,1])   

cost = tf.reduce_sum(tf.pow(y_ - y, 2))/(2*n_samples)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

for i in range(training_epochs):  
    sess.run(optimizer, feed_dict={x: inputX, y_: inputY})
    if i % display_step == 0:
        cc = sess.run(cost, feed_dict={x: inputX, y_:inputY})
        print ("Training step:", '%04d' % (i), "cost=", "{:.9f}".format(cc)) 

The code runs, but produces the following cost updates:

Training step: 0000 cost= 0.271760166
Training step: 0050 cost= 0.271760166
Training step: 0100 cost= 0.271760166
Training step: 0150 cost= 0.271760166
Training step: 0200 cost= 0.271760166
Training step: 0250 cost= 0.271760166
Training step: 0300 cost= 0.271760166
Training step: 0350 cost= 0.271760166
etc.

Question: why doesn't the cost update at each training step? Thanks!

1 answer:

Answer 0 (score: 1)

Problem: your gradients are zero, so your weights never change. You are feeding softmax a single-dimensional (batch_size, 1) tensor, which makes the softmax output constant (1). That makes its gradient zero.
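To see this concretely: softmax normalizes across the last dimension, so with a single logit per row every row normalizes to exactly 1. A minimal sketch in the same TF 1.x style as the question (the logit values are made up for illustration):

import tensorflow as tf

# Softmax over a (batch_size, 1) tensor: each row holds one logit,
# so the normalization exp(v) / exp(v) yields 1.0 for every row,
# regardless of the input values.
logits = tf.constant([[-3.0], [0.0], [5.0]])
probs = tf.nn.softmax(logits)

with tf.Session() as sess:
    print(sess.run(probs))  # [[1.] [1.] [1.]] -- constant output, zero gradient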

Solution:

If you are doing logistic regression, use tf.nn.sigmoid_cross_entropy_with_logits(y_values, y_)
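In the question's code that would look roughly as follows (a sketch; note that newer TF releases require the keyword-argument form labels=/logits= used here):

# Logistic-regression cost: mean sigmoid cross-entropy over the batch.
# y_values are the raw logits -- no softmax or sigmoid applied beforehand.
cost = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=y_, logits=y_values))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)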

If you are doing linear regression, use this (i.e., do not use the softmax): cost = tf.reduce_sum(tf.pow(y_ - y_values, 2))/(2*n_samples)

If you insist on using softmax together with MSE, use the following in place of the softmax: y = tf.reciprocal(1 + tf.exp(-y_values))
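Note that tf.reciprocal(1 + tf.exp(-y_values)) is just the elementwise logistic sigmoid, so y = tf.sigmoid(y_values) computes the same thing; unlike the single-column softmax, its output varies with the input, which gives the squared-error cost a nonzero gradient.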