Test accuracy gets worse when I add dropout to my TensorFlow NN

Time: 2017-08-10 04:00:52

Tags: tensorflow neural-network

Hi, I'm trying to add dropout to my TensorFlow MNIST classifier. Before adding dropout, my test accuracy was around 95%. When I add dropout, it drops to about 10% (random-guess level), even though the cost keeps decreasing. When I set both dropout values to 1 it does work, but as soon as I set the first dropout value to anything less than 1 it stops learning. What am I doing wrong?

import tensorflow as tf
import math
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot = True)

n_nodes_hl1 = 500
n_nodes_hl2 = 300
n_nodes_hl3 = 100
n_classes = 10
batch_size = 100

x = tf.placeholder('float', [None, 784]) #placeholder as input
y = tf.placeholder('float') #placeholder as output
lr = tf.placeholder(tf.float32)
pkeep = tf.placeholder(tf.float32)

max_learning_rate = 0.003
min_learning_rate = 0.0001
decay_speed = 2000.0 # 0.003-0.0001-2000=>0.9826 done in 5000 iterations

def neural_network_model(data):
    hidden_1_layer = {'weights':tf.Variable(tf.random_normal([784, n_nodes_hl1])),
                      'biases':tf.Variable(tf.ones([n_nodes_hl1]))}

    hidden_2_layer = {'weights':tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])),
                      'biases':tf.Variable(tf.ones([n_nodes_hl2]))}

    hidden_3_layer = {'weights':tf.Variable(tf.random_normal([n_nodes_hl2, n_nodes_hl3])),
                      'biases':tf.Variable(tf.ones([n_nodes_hl3]))}

    output_layer = {'weights':tf.Variable(tf.random_normal([n_nodes_hl3, n_classes])),
                    'biases':tf.Variable(tf.ones([n_classes])),}



    l1 = tf.add(tf.matmul(data,hidden_1_layer['weights']), hidden_1_layer['biases'])
    l1 = tf.nn.relu(l1) #uses relu function as activation function
    l1d = tf.nn.dropout(l1, pkeep)

    l2 = tf.add(tf.matmul(l1d,hidden_2_layer['weights']), hidden_2_layer['biases'])
    l2 = tf.nn.relu(l2)
    l2d = tf.nn.dropout(l2, pkeep)

    l3 = tf.add(tf.matmul(l2d,hidden_3_layer['weights']), hidden_3_layer['biases'])
    l3 = tf.nn.relu(l3)
    l3d = tf.nn.dropout(l3, pkeep)

    output = tf.matmul(l3d,output_layer['weights']) + output_layer['biases']
    return output

def train_neural_network(x):
    prediction = neural_network_model(x)
    cost = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
    optimizer = tf.train.AdamOptimizer(lr).minimize(cost)

    hm_epochs = 9
    with tf.Session() as sess: #the with statement has something to do with code cleanup when done
        sess.run(tf.global_variables_initializer())

        for epoch in range(hm_epochs):
            epoch_loss = 0
            for i in range(int(mnist.train.num_examples/batch_size)):
                epoch_x, epoch_y = mnist.train.next_batch(batch_size) #the epochx is the pics and the epochy is the labels
                # learning rate decay
                learning_rate = min_learning_rate + (max_learning_rate - min_learning_rate) * math.exp(-i/decay_speed)
                _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: epoch_y, lr: learning_rate, pkeep: .5})
                epoch_loss += c

            print('Epoch', epoch+1, 'completed out of',hm_epochs,'loss:',epoch_loss)

            correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
            accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
            print('Accuracy:',accuracy.eval({x:mnist.test.images, y:mnist.test.labels, pkeep: 1.0}))

train_neural_network(x)

The output is:

Extracting /tmp/data/train-images-idx3-ubyte.gz
Extracting /tmp/data/train-labels-idx1-ubyte.gz
Extracting /tmp/data/t10k-images-idx3-ubyte.gz
Extracting /tmp/data/t10k-labels-idx1-ubyte.gz
Epoch 1 completed out of 9 loss: 1047962.8098
Accuracy: 0.1043
Epoch 2 completed out of 9 loss: 12696.1747944
Accuracy: 0.1135
Epoch 3 completed out of 9 loss: 4314.31121588
Accuracy: 0.1135
Epoch 4 completed out of 9 loss: 2532.65784574
Accuracy: 0.1135
Epoch 5 completed out of 9 loss: 2025.4943831
Accuracy: 0.1135
Epoch 6 completed out of 9 loss: 1732.2871151
Accuracy: 0.1135
Epoch 7 completed out of 9 loss: 1656.68535948
Accuracy: 0.1135
Epoch 8 completed out of 9 loss: 1512.10817194
Accuracy: 0.1135
Epoch 9 completed out of 9 loss: 1463.4490006
Accuracy: 0.1135

2 Answers:

Answer 0 (score: 0):

As far as I can tell, the problem is that this is a shallow network and dropout is applied starting at the first layer. That means even the very basic features, which form the foundation for learning more complex features deeper in the network, are not reliably propagated to the deeper layers. The reason dropout works in deep CNNs, for example, is that the convolutional layers have already built up good low-level representations of the input by the time you reach the fully connected layers, so applying dropout to those fully connected layers just ensures that no single low-level feature is given too much importance; in short, it prevents overfitting.

So, in short, I would suggest not applying dropout in the first two layers; accuracy should be better.
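A minimal sketch of that suggestion, reusing the layer variables and the pkeep placeholder from the question and applying dropout only to the deepest hidden layer (the exact placement is a judgment call):

    l1 = tf.add(tf.matmul(data, hidden_1_layer['weights']), hidden_1_layer['biases'])
    l1 = tf.nn.relu(l1)  # no dropout on the first hidden layer

    l2 = tf.add(tf.matmul(l1, hidden_2_layer['weights']), hidden_2_layer['biases'])
    l2 = tf.nn.relu(l2)  # no dropout on the second hidden layer either

    l3 = tf.add(tf.matmul(l2, hidden_3_layer['weights']), hidden_3_layer['biases'])
    l3 = tf.nn.relu(l3)
    l3d = tf.nn.dropout(l3, pkeep)  # dropout only on the deepest hidden layer

    output = tf.matmul(l3d, output_layer['weights']) + output_layer['biases']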

Answer 1 (score: 0):

I ran into a similar problem and found that when you are modeling something more complex, you have to choose your initialization parameters more carefully. I would suggest initializing your variables along these lines:

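A minimal sketch of that kind of initialization, assuming the idea is a truncated-normal initializer with a small standard deviation and small constant biases instead of tf.random_normal with its default stddev of 1.0 (the exact values here are illustrative, not prescriptive):

    # Sketch (assumed values): smaller initial weights and biases
    hidden_1_layer = {'weights': tf.Variable(tf.truncated_normal([784, n_nodes_hl1], stddev=0.1)),
                      'biases': tf.Variable(tf.constant(0.1, shape=[n_nodes_hl1]))}

    hidden_2_layer = {'weights': tf.Variable(tf.truncated_normal([n_nodes_hl1, n_nodes_hl2], stddev=0.1)),
                      'biases': tf.Variable(tf.constant(0.1, shape=[n_nodes_hl2]))}

    hidden_3_layer = {'weights': tf.Variable(tf.truncated_normal([n_nodes_hl2, n_nodes_hl3], stddev=0.1)),
                      'biases': tf.Variable(tf.constant(0.1, shape=[n_nodes_hl3]))}

    output_layer = {'weights': tf.Variable(tf.truncated_normal([n_nodes_hl3, n_classes], stddev=0.1)),
                    'biases': tf.Variable(tf.constant(0.1, shape=[n_classes]))}

With weights drawn from a unit-variance normal and 784 inputs per neuron, the pre-activations can become very large, so combining that with dropout tends to make training unstable; smaller initial weights usually behave much better.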