Tensorflow - Loss increases to NaN

Time: 2017-11-12 05:43:22

Tags: python tensorflow neural-network deep-learning

I am working through Udacity's Deep Learning course. The interesting thing I have observed is that, on the same dataset, my 1-layer neural network works fine, but as soon as I add more layers the loss increases until it becomes NaN.

I used the following blog post as a reference: http://www.ritchieng.com/machine-learning/deep-learning/tensorflow/regularization/

Here is my code:

import math
import tensorflow as tf

batch_size = 128
beta = 1e-3

# Network Parameters
n_hidden_1 = 1024 # 1st layer number of neurons
n_hidden_2 = 512 # 2nd layer number of neurons

graph = tf.Graph()
with graph.as_default():
    # Input data. For the training data, we use a placeholder that will be fed
    # at run time with a training minibatch.
    tf_train_dataset = tf.placeholder(tf.float32,
                                  shape=(batch_size, image_size * image_size))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)


    # Variables.
    w1 = tf.Variable(tf.truncated_normal([image_size * image_size, n_hidden_1]))
    w2 = tf.Variable(tf.truncated_normal([n_hidden_1, n_hidden_2],stddev=math.sqrt(2.0/n_hidden_1)))
    w3 = tf.Variable(tf.truncated_normal([n_hidden_2, num_labels],stddev=math.sqrt(2.0/n_hidden_2)))

    b1 = tf.Variable(tf.zeros([n_hidden_1]))
    b2 = tf.Variable(tf.zeros([n_hidden_2]))
    b3 = tf.Variable(tf.zeros([num_labels]))

    # Learning rate decay configs
    global_step = tf.Variable(0, trainable=False)
    starter_learning_rate = 0.5

    # Training computation.
    logits_1 = tf.matmul(tf_train_dataset, w1) + b1
    hidden_layer_1 = tf.nn.relu(logits_1)
    layer_1_dropout = tf.nn.dropout(hidden_layer_1, keep_prob)

    logits_2 = tf.matmul(layer_1_dropout, w2) + b2
    hidden_layer_2 = tf.nn.relu(logits_2)
    layer_2_dropout = tf.nn.dropout(hidden_layer_2, keep_prob)

    # the output logits
    logits_3 = tf.matmul(layer_2_dropout, w3) + b3


    # Normal Loss
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits_3, labels=tf_train_labels))

    learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step, 10000, 0.96)
    # Optimizer.
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)

num_steps = 3001

with tf.Session(graph=graph) as session:
    tf.global_variables_initializer().run()
    for step in range(num_steps):

        # some logic to get training data batches

        feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
        _, l, predictions = session.run(
            [optimizer, loss, train_prediction], feed_dict=feed_dict)

        print("Minibatch loss at step %d: %f" % (step, l))

After printing the loss, I found that it increases exponentially and then becomes NaN:

Minibatch loss at step 1: 7474.770508
Minibatch loss at step 2: 43229.820312
Minibatch loss at step 3: 50132.988281
Minibatch loss at step 4: 10196093.000000
Minibatch loss at step 5: 3162884096.000000
Minibatch loss at step 6: 25022026481664.000000
Minibatch loss at step 7: 651425419900819079168.000000
Minibatch loss at step 8: 21374465836947504345731163114962944.000000
Minibatch loss at step 9: nan
Minibatch loss at step 10: nan

My code is almost identical to the one in that post, yet I still get NaN.

Any suggestions on what I am doing wrong here?

1 Answer:

Answer 0 (score: 4)

This is because the ReLU activation function can cause exploding gradients. You therefore need to lower the learning rate accordingly (starter_learning_rate in your case). In addition, you can also try a different activation function.
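For illustration, here is a minimal sketch of those two suggestions applied to the variables from the question. The value 0.05 and the choice of tanh are only example assumptions, not prescribed values:

    # Suggestion 1: start from a much smaller learning rate than 0.5
    # (0.05 is only an illustrative value; you may need to tune it further).
    starter_learning_rate = 0.05
    learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step, 10000, 0.96)
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)

    # Suggestion 2: try a different activation, e.g. tanh instead of ReLU.
    # tanh is bounded, so the hidden activations cannot grow without limit.
    hidden_layer_1 = tf.nn.tanh(logits_1)
    hidden_layer_2 = tf.nn.tanh(logits_2)

Lowering the learning rate is usually tried first, since it keeps the rest of the architecture unchanged.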

This question (In simple multi-layer FFNN only ReLU activation function doesn't converge) is similar to your case. Follow the answer there and you will understand.

Hope this helps.