This is my first step with TensorFlow; I'm posting it in case others run into the same problem as me and there is a workaround.
I am working through the MNIST tutorial, and my current code snippet is:
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

#placeholder for input
x = tf.placeholder(tf.float32,[None,784]) # None means a dimension can be of any length
#Weights for the model: 784 pixels map to ten outputs
W = tf.Variable(tf.zeros([784,10]))
#bias
b = tf.Variable( tf.zeros([10]))
#implementing the model
y = tf.matmul(x,W) + b
#implementing cross-entropy
y_ = tf.placeholder(tf.float32,[None,10])
#cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
for _ in range(1000):
    batch_xs, batch_xy64 = mnist.train.next_batch(100)
    batch_xy = batch_xy64.astype(np.float32)
    sess.run(train_step, feed_dict={x: batch_xs, y: batch_xy})
correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
print (sess.run(accuracy,feed_dict={x:mnist.test.images, y_:mnist.test.labels}))
First, I tried both the cross_entropy from the MNIST tutorial description and the one from the provided source code; it made no difference.
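For reference, the two formulations look roughly like this (a minimal sketch; the _manual/_builtin names are only for illustration, y_ holds the one-hot labels and y the raw logits from the snippet above; the manual form needs softmax probabilities, while the built-in op applies the softmax internally):

# manual form from the tutorial text: softmax first, then cross-entropy
cross_entropy_manual = tf.reduce_mean(
    -tf.reduce_sum(y_ * tf.log(tf.nn.softmax(y)), reduction_indices=[1]))

# numerically more stable built-in form used in the snippet
cross_entropy_builtin = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))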
Note that I explicitly cast batch_xy, because it is returned as float64.
That also seemed to be part of the problem, since session.run appears to expect float32 tensors and variables.
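The cast can be verified with a quick dtype check like this (a minimal sketch reusing mnist, np and next_batch from the snippet above):

batch_xs, batch_xy64 = mnist.train.next_batch(100)
print(batch_xs.dtype)                       # dtype of the image batch
print(batch_xy64.dtype)                     # float64 here, hence the explicit cast
print(batch_xy64.astype(np.float32).dtype)  # float32 after casting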
As far as I can tell from debugging the code, the labels in mnist come back as float64 - maybe that explains my error:
...
  File "/home/braunalx/python-workspace/LearnTensorFlow/firstSteps/MNIST_Start.py", line 40, in mnist_run
    y_ = tf.placeholder(tf.float32,[None,10])
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 1548, in placeholder
    return gen_array_ops._placeholder(dtype=dtype, shape=shape, name=name)
...
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,10]
    [[Node: Placeholder_1 = Placeholder[dtype=DT_FLOAT, shape=[?,10], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Is there anything wrong with the provided mnist data?
Answer 0 (score: 0)
The error indicates that you did not feed a value for a required placeholder. Replace y with y_ in this line: sess.run(train_step, feed_dict={x: batch_xs, y: batch_xy})
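In other words, the training step should feed the label placeholder y_ rather than the model output y (a sketch of the corrected line, keeping the question's variable names):

sess.run(train_step, feed_dict={x: batch_xs, y_: batch_xy})

The traceback points at 'Placeholder_1' with shape [?,10], which is y_, so the missing feed for y_ is what triggers the InvalidArgumentError, not the float64/float32 cast of the labels.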