How to get the error value of a neural network in TensorFlow

Time: 2018-03-19 15:39:16

Tags: tensorflow neural-network

I have been trying to implement a neural network with the following code, but I am having trouble displaying the loss value. Can someone help me?

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

x_input = mnist.train.images[:100,:]
y_input = mnist.train.labels[:100,:]


LearningRate = 0.01
noOfEpochs = 10

# Network parameters
hidden_1_Neurons = 50
hidden_2_Neurons = 50
inputNeurons = 784
noOfClasses = 10

X = tf.placeholder(tf.float32, shape=[None, inputNeurons])
Y = tf.placeholder(tf.float32, shape=[None, noOfClasses])

# Weights and biases

hidden_1_weights = tf.Variable(tf.random_normal([inputNeurons, hidden_1_Neurons]))
hidden_2_weights = tf.Variable(tf.random_normal([hidden_1_Neurons, hidden_2_Neurons]))
outLayer_weights = tf.Variable(tf.random_normal([hidden_2_Neurons, noOfClasses]))

hidden_1_Bias = tf.Variable(tf.random_normal([hidden_1_Neurons]))
hidden_2_Bias = tf.Variable(tf.random_normal([hidden_2_Neurons]))
outLayer_Bias = tf.Variable(tf.random_normal([noOfClasses]))


hidden_1 = tf.add(tf.matmul(X, hidden_1_weights), hidden_1_Bias)
hidden_2 = tf.add(tf.matmul(hidden_1, hidden_2_weights), hidden_2_Bias)
outLayer = tf.add(tf.matmul(hidden_2, outLayer_weights), outLayer_Bias)

softMaxOutput = tf.nn.softmax(outLayer)

cross_entropy = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(softMaxOutput), reduction_indices=[1]))
training = tf.train.GradientDescentOptimizer(LearningRate).minimize(cross_entropy)

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

sess.run(training, feed_dict={X: x_input, Y: y_input})
actualLoss = sess.run(cross_entropy, feed_dict={X: x_input, Y: y_input})
print("actualLoss ", actualLoss)

The output I get is as follows:

actualLoss  nan

I believe it is printing nan (not a number). Is this code correct for obtaining the actual loss value?

1 Answer:

Answer 0 (score: 2):

You should remove the softMaxOutput variable and use TensorFlow's built-in softmax cross-entropy loss function, which applies the softmax activation and computes the cross-entropy loss for you:

cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=outLayer, labels=Y))
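For completeness, here is a minimal sketch (my addition, not part of the original answer) of a training loop built on that loss which prints the loss value at every epoch. It reuses the poster's graph, feed data, and the otherwise-unused noOfEpochs variable:

training = tf.train.GradientDescentOptimizer(LearningRate).minimize(cross_entropy)

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(noOfEpochs):
        # Run one training step and fetch the current loss in the same call
        _, lossValue = sess.run([training, cross_entropy],
                                feed_dict={X: x_input, Y: y_input})
        print("epoch", epoch, "loss", lossValue)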

As for your question: the nan comes from log(softMaxOutput). If any softmax output is exactly zero, log(0) is -inf (and 0 * -inf evaluates to nan), so you have to add a very small value such as 1e-5 to work around the problem:

cross_entropy = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(softMaxOutput + 1e-5), reduction_indices=[1]))
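Alternatively (my suggestion, not in the original answer), you can clamp the probabilities with tf.clip_by_value before taking the log, which keeps them away from zero without adding a constant offset to every entry:

# Clamp into [1e-10, 1.0] so tf.log never sees an exact zero
cross_entropy = tf.reduce_mean(
    -tf.reduce_sum(Y * tf.log(tf.clip_by_value(softMaxOutput, 1e-10, 1.0)),
                   reduction_indices=[1]))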