MLP (ReLU) stops learning after a few iterations (TensorFlow)

Time: 2017-12-15 06:07:30

Tags: python machine-learning tensorflow neural-network

2-layer MLP (ReLU) + Softmax

After about 20 iterations, TensorFlow just gives up and stops updating any weights or biases.

At first I thought my ReLUs were dying, so I displayed histograms to make sure none of them were stuck at 0. And none of them are!

They simply stop changing after a few iterations, and the cross entropy stays high. ReLU, sigmoid and tanh all give the same result. Tuning the GradientDescentOptimizer learning rate from 0.01 up to 0.5 doesn't change much either.

There has to be a bug somewhere, like an actual error in my code. I can't even overfit a small subset of the samples!

Here are my histograms and here is my code. If someone could take a look, it would be a huge help.

We have 3000 samples with 6 values each, between 0 and 255, to classify into two classes: [1, 0] or [0, 1] (I made sure to randomize the order).
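For readers who want a concrete picture of that data layout, here is a minimal sketch of inputs in the described shape; the array names and the random labels are illustrative only, not the asker's actual pipeline:

    import numpy as np

    # 3000 samples, 6 feature values each, in [0, 255]
    features = np.random.randint(0, 256, size=(3000, 6)).astype(np.float32)

    # One-hot labels: either [1, 0] or [0, 1]
    labels = np.eye(2, dtype=np.float32)[np.random.randint(0, 2, size=3000)]

    # Shuffle features and labels together so the order is randomized
    perm = np.random.permutation(len(features))
    trainData = (features[perm], labels[perm])

The network and training code from the question: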

    def nn_layer(input_tensor, input_dim, output_dim, layer_name, act=tf.nn.relu):
        with tf.name_scope(layer_name):
            weights = tf.Variable(tf.truncated_normal([input_dim, output_dim], stddev=1.0 / math.sqrt(float(6))))
            tf.summary.histogram('weights', weights)

            biases = tf.Variable(tf.constant(0.4, shape=[output_dim]))
            tf.summary.histogram('biases', biases)

            preactivate = tf.matmul(input_tensor, weights) + biases
            tf.summary.histogram('pre_activations', preactivate)

            #act=tf.nn.relu
            activations = act(preactivate, name='activation')
            tf.summary.histogram('activations', activations)

            return activations


    #We have 3000 scalars with 6 values between 0 and 255 to classify in two classes
    x = tf.placeholder(tf.float32, [None, 6])
    y = tf.placeholder(tf.float32, [None, 2])

    #After normalisation, input is between 0 and 1
    normalised = tf.scalar_mul(1/255,x)

    #Two layers
    hidden1 = nn_layer(normalised, 6, 4, "hidden1")
    hidden2 = nn_layer(hidden1, 4, 2, "hidden2")

    #Finish by a softmax
    softmax = tf.nn.softmax(hidden2)

    #Defining loss, accuracy etc..
    cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=softmax))      
    tf.summary.scalar('cross_entropy', cross_entropy)

    correct_prediction = tf.equal(tf.argmax(softmax, 1), tf.argmax(y, 1))

    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) 
    tf.summary.scalar('accuracy', accuracy)

    #Init session and writers and misc
    session = tf.Session()

    train_writer = tf.summary.FileWriter('log', session.graph)
    train_writer.add_graph(session.graph)

    init= tf.global_variables_initializer()
    session.run(init)

    merged = tf.summary.merge_all()

    #Train
    train_step = tf.train.GradientDescentOptimizer(0.05).minimize(cross_entropy)

    batch_x, batch_y = self.trainData
    for _ in range(1000):
        session.run(train_step, {x: batch_x, y: batch_y})
        #Every 10 steps, add to the summary
        if _ % 10 == 0: 
            s = session.run(merged, {x: batch_x, y: batch_y})
            train_writer.add_summary(s, _)


    #Evaluate
    evaluate_x, evaluate_y = self.evaluateData
    print(session.run(accuracy, {x: batch_x, y: batch_y}))
    print(session.run(accuracy, {x: evaluate_x, y: evaluate_y}))

Hidden layer 1: the output isn't zero, so this is not a dying-ReLU problem. But still, the weights are constant! TF didn't even try to modify them.

Same for hidden layer 2: TF tried tweaking them a bit and gave up pretty fast.

Cross entropy does decrease, but it stays staggeringly high.

EDIT: There were quite a few bugs in my code. The first one is that in Python, 1/255 = 0 ... changing it to 1.0/255.0 brought my code to life.

So basically, my input was being multiplied by 0 and the neural network was completely blind. It tried to get the best result it could while blind, and then gave up. That explains its behaviour completely.
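For anyone unfamiliar with this pitfall: under Python 2 semantics, dividing two integers truncates the result, so the scaling factor silently becomes zero:

    print(1 / 255)      # 0 under Python 2: integer division truncates
    print(1.0 / 255.0)  # 0.00392156862745098: float division
    # Note: in Python 3, 1 / 255 already returns a float; 1 // 255 is the truncating form.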

The other problem was that I was applying softmax twice ... fixing that helped as well. By trying different learning rates and different numbers of epochs, I finally found something that works.
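Concretely (paraphrasing the change, not the asker's exact diff), the fix is to feed the raw output of the last layer to the loss, since tf.nn.softmax_cross_entropy_with_logits applies a softmax internally:

    # Before: softmax applied twice (once explicitly, once inside the loss op)
    # cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    #     labels=y, logits=tf.nn.softmax(output)))

    # After: pass the raw logits; the loss op applies softmax itself
    cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
        labels=y, logits=output))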

Here is the final working code:

    def runModel(self):

        def nn_layer(input_tensor, input_dim, output_dim, layer_name, act=tf.nn.relu):
            with tf.name_scope(layer_name):

                #This is a standard weight init for neural networks with ReLU.
                #I divide by math.sqrt(float(6)) because my input has 6 values
                weights = tf.Variable(tf.truncated_normal([input_dim, output_dim], stddev=1.0 / math.sqrt(float(6))))
                tf.summary.histogram('weights', weights)

                #I chose this bias myself. It works. Not sure why.
                biases = tf.Variable(tf.constant(0.4, shape=[output_dim]))
                tf.summary.histogram('biases', biases)

                preactivate = tf.matmul(input_tensor, weights) + biases
                tf.summary.histogram('pre_activations', preactivate)

                #Some layers have ReLU as their activation function
                #Some have no activation function at all
                if act == "None":
                    activations = preactivate
                else:
                    activations = act(preactivate, name='activation')
                    tf.summary.histogram('activations', activations)

                return activations

        #We have 3000 scalars with 6 values between 0 and 255 to classify in two classes
        x = tf.placeholder(tf.float32, [None, 6])
        y = tf.placeholder(tf.float32, [None, 2])

        #After normalisation, the input is between 0 and 1
        #Normalising the input really helps. Nothing is doable without it
        #But my ERROR was to write 1/255. Because in Python
        #1/255 = 0 .... (integer division)
        #But 1.0/255.0 = 0.003921568 (float division)
        normalised = tf.scalar_mul(1.0/255.0, x)

        #Three layers total. The first one is just a matrix multiplication
        input = nn_layer(normalised, 6, 4, "input", act="None")
        #The second one has a ReLU after a matrix multiplication
        hidden1 = nn_layer(input, 4, 4, "hidden", act=tf.nn.relu)
        #The last one is also just a matrix multiplication
        #WARNING! No softmax here! Because later we call a function
        #that implicitly applies a softmax,
        #and it's bad practice to do two softmax one after the other
        output = nn_layer(hidden1, 4, 2, "output", act="None")

        #Tried different learning rates
        #A higher learning rate finds a result faster
        #but may end up in a local minimum
        #A lower learning rate needs many more epochs
        learning_rate = 0.03

        with tf.name_scope('learning_rate_' + str(learning_rate)):
            #Defining loss, accuracy etc..
            cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=output))
            tf.summary.scalar('cross_entropy', cross_entropy)

            correct_prediction = tf.equal(tf.argmax(output, 1), tf.argmax(y, 1))

            accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
            tf.summary.scalar('accuracy', accuracy)

        #Init session and writers and misc
        session = tf.Session()

        train_writer = tf.summary.FileWriter('log', session.graph)
        train_writer.add_graph(session.graph)

        init = tf.global_variables_initializer()
        session.run(init)

        merged = tf.summary.merge_all()

        #Train
        train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)

        batch_x, batch_y = self.trainData
        for _ in range(1000):
            session.run(train_step, {x: batch_x, y: batch_y})
            #Every 10 steps, add to the summary
            if _ % 10 == 0:
                s = session.run(merged, {x: batch_x, y: batch_y})
                train_writer.add_summary(s, _)

        #Evaluate
        evaluate_x, evaluate_y = self.evaluateData
        print(session.run(accuracy, {x: batch_x, y: batch_y}))
        print(session.run(accuracy, {x: evaluate_x, y: evaluate_y}))

Final results after fixing

2 Answers:

Answer 0 (score: 4)

I'm afraid you'll have to lower your learning rate; it's too high. A high learning rate usually lands you in a local minimum rather than the global one.

Try 0.001, 0.0001 or even 0.00001. Or make your learning rate flexible.

I haven't checked the code, so try tuning the learning rate first.
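If you want to try the flexible-learning-rate idea, a minimal TF 1.x sketch using exponential decay could look like this (the starting rate and decay numbers are placeholders, not values from the answer):

    global_step = tf.Variable(0, trainable=False)
    learning_rate = tf.train.exponential_decay(
        0.001,            # starting learning rate
        global_step,
        decay_steps=100,  # decay every 100 training steps
        decay_rate=0.96)
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(
        cross_entropy, global_step=global_step)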

Answer 1 (score: 0)

Just in case someone needs this in the future:

I had initialized the layers of a two-layer network with np.random.randn, and the network refused to learn. Switching to He initialization (for the ReLU layers) and Xavier initialization (for the softmax layer) made it work.
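A rough NumPy sketch of that change, using the 6 -> 4 -> 2 layer sizes from the question as an example (the sqrt(1/fan_in) scaling is one common form of Xavier):

    import numpy as np

    # Plain randn: the variance ignores the layer size, which can stall learning
    W1_bad = np.random.randn(6, 4)

    # He initialization for the ReLU layer: scale by sqrt(2 / fan_in)
    W1 = np.random.randn(6, 4) * np.sqrt(2.0 / 6)

    # Xavier initialization for the softmax/output layer: scale by sqrt(1 / fan_in)
    W2 = np.random.randn(4, 2) * np.sqrt(1.0 / 4)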