Adding a second hidden layer in TensorFlow breaks the loss calculation

Date: 2018-02-12 18:29:51

Tags: python-3.x tensorflow machine-learning loss

I'm working through assignment 3 of the Udacity Deep Learning course. I have a working neural network with one hidden layer, but when I add a second one, the loss becomes nan.

Here is the graph code:

num_nodes_layer_1 = 1024
num_nodes_layer_2 = 128
num_inputs = 28 * 28
num_labels = 10
batch_size = 128

graph = tf.Graph()
with graph.as_default():

    # Input data. For the training data, we use a placeholder that will be fed
    # at run time with a training minibatch.
    tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, num_inputs))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)

    # variables
    # hidden layer 1
    hidden_weights_1 = tf.Variable(tf.truncated_normal([num_inputs, num_nodes_layer_1]))
    hidden_biases_1 = tf.Variable(tf.zeros([num_nodes_layer_1]))

    # hidden layer 2
    hidden_weights_2 = tf.Variable(tf.truncated_normal([num_nodes_layer_1, num_nodes_layer_2]))
    hidden_biases_2 = tf.Variable(tf.zeros([num_nodes_layer_2]))

    # linear layer
    weights = tf.Variable(tf.truncated_normal([num_nodes_layer_2, num_labels]))
    biases = tf.Variable(tf.zeros([num_labels]))

    # Training computation.
    y1 = tf.nn.relu(tf.matmul(tf_train_dataset, hidden_weights_1) + hidden_biases_1)
    y2 = tf.nn.relu(tf.matmul(y1, hidden_weights_2) + hidden_biases_2)
    logits = tf.matmul(y2, weights) + biases

    # Calc loss
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(labels=tf_train_labels, logits=logits))

    # Optimizer.
    # We are going to find the minimum of this loss using gradient descent.
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    # Predictions for the training, validation, and test data.
    # These are not part of training, but merely here so that we can report
    # accuracy figures as we train.
    train_prediction = tf.nn.softmax(logits)

    y1_valid = tf.nn.relu(tf.matmul(tf_valid_dataset, hidden_weights_1) + hidden_biases_1)
    y2_valid = tf.nn.relu(tf.matmul(y1_valid, hidden_weights_2) + hidden_biases_2)
    valid_prediction = tf.nn.softmax(tf.matmul(y2_valid, weights) + biases)

    y1_test = tf.nn.relu(tf.matmul(tf_test_dataset, hidden_weights_1) + hidden_biases_1)
    y2_test = tf.nn.relu(tf.matmul(y1_test, hidden_weights_2) + hidden_biases_2)
    test_prediction = tf.nn.softmax(tf.matmul(y2_test, weights) + biases)

It doesn't raise an error. But after the first step, the loss prints as nan and the network stops learning:

Initialized
Minibatch loss at step 0: 2133.468750
Minibatch accuracy: 8.6%
Validation accuracy: 10.0%
Minibatch loss at step 400: nan
Minibatch accuracy: 9.4%
Validation accuracy: 10.0%
Minibatch loss at step 800: nan
Minibatch accuracy: 11.7%
Validation accuracy: 10.0%
Minibatch loss at step 1200: nan
Minibatch accuracy: 4.7%
Validation accuracy: 10.0%
Minibatch loss at step 1600: nan
Minibatch accuracy: 7.8%
Validation accuracy: 10.0%
Minibatch loss at step 2000: nan
Minibatch accuracy: 6.2%
Validation accuracy: 10.0%
Test accuracy: 10.0%

When I remove the second layer, it trains and I get around 85% accuracy. With the second layer, I would expect a score somewhere between 80% and 90%.

Am I using the wrong optimizer? Or is it just something silly I missed?

Here is the session code:

num_steps = 2001

with tf.Session(graph=graph) as session:
    tf.global_variables_initializer().run()
    print("Initialized")
    for step in range(num_steps):
        # Pick an offset within the training data, which has been randomized.
        # Note: we could use better randomization across epochs.
        offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
        # Generate a minibatch.
        batch_data = train_dataset[offset:(offset + batch_size), :]
        batch_labels = train_labels[offset:(offset + batch_size), :]
        # Prepare a dictionary telling the session where to feed the minibatch.
        # The key of the dictionary is the placeholder node of the graph to be fed,
        # and the value is the numpy array to feed to it.
        feed_dict = {
            tf_train_dataset : batch_data, 
            tf_train_labels : batch_labels,
        }
        _, l, predictions = session.run(
          [optimizer, loss, train_prediction], feed_dict=feed_dict)
        if (step % 400 == 0):
            print("Minibatch loss at step %d: %f" % (step, l))
            print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
            print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
    acc = accuracy(test_prediction.eval(), test_labels)
    print("Test accuracy: %.1f%%" % acc)
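The accuracy helper used above isn't shown in the question; in the Udacity assignment notebooks it is typically defined along these lines (an assumption, reproduced here so the session code is self-contained):

```python
import numpy as np

def accuracy(predictions, labels):
    # Percentage of rows where the arg-max of the predicted softmax
    # matches the arg-max of the one-hot labels.
    return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
            / predictions.shape[0])
```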

1 Answer:

Answer 0 (score: 3)

A learning rate of 0.5 is too high. Set it to 0.05 and it will converge:
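The divergence can be reproduced outside TensorFlow: for a quadratic loss, plain gradient descent diverges once the step size exceeds a threshold set by the curvature. A minimal NumPy-free sketch (the quadratic and the step sizes here are illustrative, not taken from the question's network):

```python
def gd(lr, steps=50, x0=1.0, curvature=5.0):
    # Minimize f(x) = 0.5 * curvature * x^2 with plain gradient descent.
    # Each update is x <- x - lr * curvature * x = (1 - lr * curvature) * x,
    # so the iterates diverge whenever |1 - lr * curvature| > 1,
    # i.e. whenever lr > 2 / curvature.
    x = x0
    for _ in range(steps):
        x -= lr * curvature * x
    return x

print(abs(gd(0.5)))   # |1 - 2.5| = 1.5 per step: the iterate blows up
print(abs(gd(0.05)))  # |1 - 0.25| = 0.75 per step: the iterate shrinks to 0
```

The same mechanism applies per-parameter in the network: with a step of 0.5 the sharpest directions of the loss overshoot further on every update until the logits overflow and the cross-entropy turns into nan.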

Minibatch loss at step 0: 1506.469238
Minibatch loss at step 400: 7796.088867
Minibatch loss at step 800: 9893.363281
Minibatch loss at step 1200: 5089.553711
Minibatch loss at step 1600: 6148.481445
Minibatch loss at step 2000: 5257.598145
Minibatch loss at step 2400: 1716.116455
Minibatch loss at step 2800: 1600.826538
Minibatch loss at step 3200: 941.884766
Minibatch loss at step 3600: 1033.936768
Minibatch loss at step 4000: 1808.775757
Minibatch loss at step 4400: 113.909866
Minibatch loss at step 4800: 49.800560
Minibatch loss at step 5200: 20.392700
Minibatch loss at step 5600: 6.253595
Minibatch loss at step 6000: 4.372780
Minibatch loss at step 6400: 6.862935
Minibatch loss at step 6800: 6.951239
Minibatch loss at step 7200: 3.528607
Minibatch loss at step 7600: 2.968611
Minibatch loss at step 8000: 3.164592
...
Minibatch loss at step 19200: 2.141401

A few additional pointers:

  1. tf_train_dataset and tf_train_labels should be tf.placeholders of shape [None, 784]. The None dimension lets you change the batch size during training, instead of being limited to a fixed size such as 128.

  2. Don't make tf_valid_dataset and tf_test_dataset tf.constants; just feed the validation and test sets through the corresponding feed_dict. That way you get rid of the extra ops at the end of the graph for validation and test accuracy.

  3. I'd also suggest sampling a fresh batch of validation and test data each time you check val/test accuracy, rather than using the same batch on every check.
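Pointers 1 and 2 amount to replacing the fixed-shape placeholders and the tf.constant datasets with None-shaped placeholders that serve training, validation, and test alike. A sketch in the question's TF 1.x style (written against tensorflow.compat.v1 so it also runs under TF 2; the toy single-layer model and random data are illustrative only):

```python
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

num_inputs, num_labels = 28 * 28, 10

graph = tf.Graph()
with graph.as_default():
    # None lets the same placeholders accept any batch size,
    # including a full validation or test set.
    x = tf.placeholder(tf.float32, shape=(None, num_inputs))

    w = tf.Variable(tf.truncated_normal([num_inputs, num_labels]))
    b = tf.Variable(tf.zeros([num_labels]))
    prediction = tf.nn.softmax(tf.matmul(x, w) + b)

with tf.Session(graph=graph) as session:
    tf.global_variables_initializer().run()
    # The one prediction op serves both a minibatch and a larger
    # "validation" set -- no tf.constant copies of the data in the graph.
    small = session.run(prediction, {x: np.random.rand(128, num_inputs)})
    large = session.run(prediction, {x: np.random.rand(500, num_inputs)})
    print(small.shape, large.shape)  # (128, 10) (500, 10)
```

With this layout, the validation/test towers (y1_valid, y1_test, etc.) in the question's graph become unnecessary: one forward pass serves all three datasets via feed_dict.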