Why do softmax_cross_entropy_with_logits and doing softmax and cross-entropy separately produce different results?

Time: 2017-05-12 04:50:08

Tags: python tensorflow deep-learning

I was building a program that uses the softmax function to predict handwritten digits from the MNIST dataset, and something strange happened. The cost decreased steadily over time and eventually settled at around 0.0038 (I used softmax_cross_entropy_with_logits() as the cost function), yet the accuracy was as low as 33%. So I thought, "Well... I don't know what's going on there, but maybe if I do the softmax and the cross-entropy separately it will produce a different result!" And boom! The accuracy went up to 89%. I have no idea why doing softmax and cross-entropy separately makes such a different result. I even looked it up here: difference between tensorflow tf.nn.softmax and tf.nn.softmax_cross_entropy_with_logits

So here is my code where I use softmax_cross_entropy_with_logits() as the cost function (accuracy: 33%):

import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

X = tf.placeholder(shape=[None,784],dtype=tf.float32)
Y = tf.placeholder(shape=[None,10],dtype=tf.float32)

W1= tf.Variable(tf.random_normal([784,20]))
b1= tf.Variable(tf.random_normal([20]))
layer1 = tf.nn.softmax(tf.matmul(X,W1)+b1)

W2 = tf.Variable(tf.random_normal([20,10]))
b2 = tf.Variable(tf.random_normal([10]))

logits = tf.matmul(layer1,W2)+b2
hypothesis = tf.nn.softmax(logits) # just so I can figure out the accuracy 

cost_i= tf.nn.softmax_cross_entropy_with_logits(logits=logits,labels=Y)
cost = tf.reduce_mean(cost_i)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)


batch_size  = 100
train_epoch = 25
display_step = 1
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())

    for epoch in range(train_epoch):
        av_cost = 0
        total_batch = int(mnist.train.num_examples / batch_size)
        for batch in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(optimizer,feed_dict={X:batch_xs,Y:batch_ys})
        av_cost  += sess.run(cost,feed_dict={X:batch_xs,Y:batch_ys})/total_batch
        if epoch % display_step == 0:  # Softmax
            print ("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(av_cost))
    print ("Optimization Finished!")

    correct_prediction = tf.equal(tf.argmax(hypothesis,1),tf.argmax(Y,1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction,'float32'))
    print("Accuracy:",sess.run(accuracy,feed_dict={X:mnist.test.images,Y:mnist.test.labels}))

And here is the one where I do the softmax and the cross-entropy separately (accuracy: 89%):

import tensorflow as tf  #89 % accuracy one 
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

X = tf.placeholder(shape=[None,784],dtype=tf.float32)
Y = tf.placeholder(shape=[None,10],dtype=tf.float32)

W1= tf.Variable(tf.random_normal([784,20]))
b1= tf.Variable(tf.random_normal([20]))
layer1 = tf.nn.softmax(tf.matmul(X,W1)+b1)

W2 = tf.Variable(tf.random_normal([20,10]))
b2 = tf.Variable(tf.random_normal([10]))


#logits = tf.matmul(layer1,W2)+b2
#cost_i= tf.nn.softmax_cross_entropy_with_logits(logits=logits,labels=Y)

logits = tf.matmul(layer1,W2)+b2

hypothesis = tf.nn.softmax(logits)
cost = tf.reduce_mean(tf.reduce_sum(-Y*tf.log(hypothesis)))


optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)

batch_size  = 100
train_epoch = 25
display_step = 1
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())

    for epoch in range(train_epoch):
        av_cost = 0
        total_batch = int(mnist.train.num_examples / batch_size)
        for batch in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(optimizer,feed_dict={X:batch_xs,Y:batch_ys})
        av_cost  += sess.run(cost,feed_dict={X:batch_xs,Y:batch_ys})/total_batch
        if epoch % display_step == 0:  # Softmax
            print ("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(av_cost))
    print ("Optimization Finished!")

    correct_prediction = tf.equal(tf.argmax(hypothesis,1),tf.argmax(Y,1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction,'float32'))
    print("Accuracy:",sess.run(accuracy,feed_dict={X:mnist.test.images,Y:mnist.test.labels}))

2 Answers:

Answer 0 (score: 2)

You should be able to get similar results with both methods if you use tf.reduce_sum() together with tf.nn.softmax_cross_entropy_with_logits() in the upper example, just like you do in the lower one: cost = tf.reduce_mean(tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y)))
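In other words, the suggestion is to give the cost in the first script the same scale as the one in the second. A minimal sketch of the two cost definitions side by side (the placeholders below are stand-ins with the shapes used in the question, not the question's actual network output):

import tensorflow as tf  # TF 1.x API, as in the question

# Stand-ins with the same shapes as in the question's scripts.
Y = tf.placeholder(shape=[None, 10], dtype=tf.float32)
logits = tf.placeholder(shape=[None, 10], dtype=tf.float32)

# Cost used in the 33% script: the mean of the per-example losses.
cost_mean = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y))

# Suggested form: sum the per-example losses first, which is what the 89% script
# effectively does with tf.reduce_sum(-Y*tf.log(hypothesis)). The resulting scalar
# is batch_size times larger than cost_mean.
cost_sum = tf.reduce_mean(
    tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y)))

In effect, summing rather than averaging multiplies every gradient step by the batch size (100 here), so with the same learning_rate=0.01 the second script trains much faster per epoch.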

I increased the number of training epochs to 50 and reached accuracies of 93.06% (tf.nn.softmax_cross_entropy_with_logits()) and 93.24% (softmax and cross-entropy computed separately), so the results are quite similar.

Answer 1 (score: 2)

From the Tensorflow API here: the second way, cost = tf.reduce_mean(tf.reduce_sum(-Y*tf.log(hypothesis))), is numerically unstable, which is why you cannot get exactly the same results.
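To see the instability concretely, here is an illustrative sketch (not part of the answer; the logit value 1000.0 is just an arbitrary large number chosen to force the underflow):

import tensorflow as tf
import numpy as np

# One sample with a very large logit. tf.nn.softmax underflows the small
# probabilities to exactly 0, tf.log(0) is -inf, and 0 * -inf is nan, so the
# hand-written loss is not a finite number.
logits = tf.constant(np.array([[1000.0, 0.0, 0.0]], dtype=np.float32))
labels = tf.constant(np.array([[0.0, 1.0, 0.0]], dtype=np.float32))

manual = tf.reduce_sum(-labels * tf.log(tf.nn.softmax(logits)))              # nan
fused = tf.reduce_sum(
    tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))  # ~1000.0

with tf.Session() as sess:
    print(sess.run([manual, fused]))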

In any case, you can find on GitHub a numerically stable implementation of the cross-entropy loss function that gives the same results as the tf.nn.softmax_cross_entropy_with_logits() function.

You can see that tf.nn.softmax_cross_entropy_with_logits() does not compute the softmax normalization with large numbers but only approximates it; there are more details in the README section.
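For reference, the usual way to write a stable version by hand is the max-subtraction (log-sum-exp) trick, i.e. computing the log-softmax directly instead of taking tf.log of the softmax output. This is only an illustrative sketch of that standard technique, not the GitHub implementation the answer refers to:

import tensorflow as tf

def stable_softmax_cross_entropy(logits, labels):
    # Subtracting the row-wise max keeps tf.exp() from overflowing, and working
    # in log space avoids taking tf.log of a probability that underflowed to 0.
    shifted = logits - tf.reduce_max(logits, axis=1, keep_dims=True)
    log_softmax = shifted - tf.log(tf.reduce_sum(tf.exp(shifted), axis=1, keep_dims=True))
    # Per-example loss, same shape as the output of the fused op.
    return tf.reduce_sum(-labels * log_softmax, axis=1)

With one-hot labels this gives the same per-example values as tf.nn.softmax_cross_entropy_with_logits(), up to floating-point rounding, while staying finite even for very large logits.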