TensorFlow entropy is NaN for large inputs when training a CNN

Time: 2016-07-27 08:50:40

Tags: tensorflow deep-learning convolution entropy cross-entropy

I created a simple convolutional neural network with TensorFlow. The network works fine when I use input images with edge = 32px, but if I double the edge to 64px, the entropy returns NaN. How can I fix this?

The CNN structure is simple and looks like: input -> conv -> pool2 -> conv -> pool2 -> conv -> pool2 -> fc -> softmax
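A minimal TF 1.x sketch of such a stack, assuming 64px grayscale input; the filter sizes, channel counts, and FC width are not given in the question and are placeholders:

def conv_pool(inp, out_channels):
    """One conv + 2x2 max-pool stage, as in input -> conv -> pool2 -> ..."""
    in_channels = inp.get_shape().as_list()[-1]
    W = tf.Variable(tf.truncated_normal([5, 5, in_channels, out_channels], stddev=0.1))
    b = tf.Variable(tf.constant(0.1, shape=[out_channels]))
    conv = tf.nn.relu(tf.nn.conv2d(inp, W, strides=[1, 1, 1, 1], padding='SAME') + b)
    return tf.nn.max_pool(conv, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

x = tf.placeholder(tf.float32, [None, 64, 64, 1])   # 64px grayscale input (assumed)
h = conv_pool(x, 32)                                # 64x64 -> 32x32
h = conv_pool(h, 64)                                # 32x32 -> 16x16
h = conv_pool(h, 64)                                # 16x16 -> 8x8

# Flatten and feed a fully connected layer, producing the h_fc1_drop used below.
flat = tf.reshape(h, [-1, 8 * 8 * 64])
W_fc1 = tf.Variable(tf.truncated_normal([8 * 8 * 64, 1024], stddev=0.1))
b_fc1 = tf.Variable(tf.constant(0.1, shape=[1024]))
h_fc1 = tf.nn.relu(tf.matmul(flat, W_fc1) + b_fc1)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob=0.5)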

The entropy is computed as follows:

prediction = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction), reduction_indices=[1]))       # loss
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
train_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(ys, 1))
train_accuracy = tf.reduce_mean(tf.cast(train_pred, tf.float32))

For 64px I get:

train_accuracy=0.09000000357627869, cross_entropy=nan, test_accuracy=0.1428571492433548
train_accuracy=0.2800000011920929, cross_entropy=nan, test_accuracy=0.1428571492433548
train_accuracy=0.27000001072883606, cross_entropy=nan, test_accuracy=0.1428571492433548

For 32px it looks fine, and training produces results:

train_accuracy=0.07999999821186066, cross_entropy=20.63970184326172, test_accuracy=0.15000000596046448
train_accuracy=0.18000000715255737, cross_entropy=15.00744342803955, test_accuracy=0.1428571492433548
train_accuracy=0.18000000715255737, cross_entropy=12.469900131225586, test_accuracy=0.13571429252624512
train_accuracy=0.23000000417232513, cross_entropy=10.289153099060059, test_accuracy=0.11428571492433548

1 answer:

Answer 0: (score: 1)

As far as I know, NaN occurs when you compute log(0). I ran into the same problem.

tf.log(prediction) #This is a problem when the predicted value is 0.
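A quick NumPy illustration of the failure mode, with made-up values; once the softmax saturates so that some class gets probability exactly 0, the hand-rolled loss is poisoned:

import numpy as np

pred = np.array([0.0, 1.0])          # saturated softmax: first class gets exactly 0
ys   = np.array([0.0, 1.0])          # one-hot label for the second class

print(np.log(pred))                  # [-inf, 0.]
print(ys * np.log(pred))             # [nan, 0.]  -- 0 * -inf is NaN
print(-np.sum(ys * np.log(pred)))    # nan: the whole loss becomes NaN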

You can avoid this by adding a little noise to the prediction (related 1, related 2).

tf.log(prediction + 1e-10)

Or use the clip_by_value function from TensorFlow, which defines a minimum and a maximum value for the tensor passed in. Something like this (Documentation):

tf.log(tf.clip_by_value(prediction, 1e-10,1.0))
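Another option is TensorFlow's fused op, which computes the softmax and the log together in a numerically stable way, so the explicit tf.log disappears entirely. A sketch, assuming the logits expression and variables from the question's code:

logits = tf.matmul(h_fc1_drop, W_fc2) + b_fc2      # pre-softmax activations from the question
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=ys, logits=logits))
prediction = tf.nn.softmax(logits)                 # still available for the argmax/accuracy lines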

Hope it helps.