I'm using Python 2.7, and I really don't understand why this is happening. My guess is that Python 2.7 is causing a floating-point problem.
('Epoch', 1, 'completed out of', 10, 'loss:', 49576.683227539062)
('Epoch', 2, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 3, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 4, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 5, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 6, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 7, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 8, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 9, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 10, 'completed out of', 10, 'loss:', 0.0)
('Accuracy:', 1.0)
My code is as follows:
def train_neural_network(x):
    prediction = neural_network_model(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(prediction, y))
    optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        for epoch in range(hm_epochs):
            epoch_loss = 0
            i = 0
            while i < len(train_x):
                start = i
                end = i + batch_size
                batch_x = np.array(train_x[start:end])
                batch_y = np.array(train_y[start:end])
                _, c = sess.run([optimizer, cost],
                                feed_dict={x: batch_x,  # substitute batch_x into it
                                           y: batch_y})
                epoch_loss += c
                i += batch_size
            print('Epoch', epoch + 1, 'completed out of', hm_epochs, 'loss:', epoch_loss)

        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('Accuracy:', accuracy.eval({x: test_x, y: test_y}))
Answer 0 (score: 0)
Perhaps your data is nearly perfectly separable, and rounding drives the loss to 0.0. You could try evaluating the loss function on the test set, or print the intermediate loss at every iteration, to see whether the loss is actually decreasing.

I wouldn't worry about this, though, since your test accuracy is 100% and therefore already optimal. What more could you want?
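The rounding point can be illustrated without TensorFlow. Below is a minimal NumPy sketch (the `softmax_cross_entropy` helper and the logit values are hypothetical, chosen only for illustration): when the network assigns the correct class a sufficiently large logit, the per-example cross-entropy is roughly exp(-margin), which underflows past float32 precision when accumulated against 1.0, so the reported loss becomes exactly 0.0.

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    # Numerically stable softmax cross-entropy, mirroring what
    # tf.nn.softmax_cross_entropy_with_logits computes per example.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -(labels * log_probs).sum(axis=1)

# Hypothetical "almost perfect" predictions: the correct class gets a
# logit margin of 30, so exp(-30) ~ 9e-14 vanishes next to 1.0 in float32.
logits = np.array([[30.0, 0.0], [0.0, 30.0]], dtype=np.float32)
labels = np.array([[1.0, 0.0], [0.0, 1.0]], dtype=np.float32)

losses = softmax_cross_entropy(logits, labels)
print(losses)         # each per-example loss rounds to exactly 0.0
print(losses.mean())
```

So a loss of 0.0 with 100% accuracy is consistent with confident, correct predictions rather than a Python 2.7 bug; printing `c` for each batch in the first epoch would show the loss collapsing toward zero.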