Recurring loss values and decreasing accuracy on every minibatch

Date: 2017-09-22 00:12:50

Tags: python-2.7 tensorflow neural-network conv-neural-network

Here is my multimodal architecture:

Building the graph:

def deepnn(x_image, joint):
    '''x_image - image input [None, 224*224]; joint - joint input [None, 8500].
    Returns y_conv ([None, 7]) and the scalar keep_prob placeholder.'''

    with tf.name_scope('reshape'):
        x_image = tf.reshape(x_image,[-1,224,224,1])
        joint = tf.reshape(joint,[-1,8500])

    with tf.name_scope('conv1'):
        W_conv1 = weight_variable([5,5,1,32])
        b_conv1 = bias_variable([32])
        h_conv1 = tf.nn.relu(conv2d(x_image,W_conv1)+ b_conv1)
        s1 = tf.summary.histogram("h_conv1", h_conv1)

    # Pooling layer - downsamples by 2X.
    with tf.name_scope('pool1'):
        h_pool1 = max_pool_2x2(h_conv1)

    with tf.name_scope('conv2'):
        W_conv2 = weight_variable([5, 5, 32, 64])
        b_conv2 = bias_variable([64])
        h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
        s2 = tf.summary.histogram("h_conv2", h_conv2)

    # Second pooling layer.
    with tf.name_scope('pool2'):
        h_pool2 = max_pool_2x2(h_conv2)

    #FC layer-Image
    with tf.name_scope('fc1'):
        W_fc1 = weight_variable([56 * 56 * 64, 1024])
        b_fc1 = bias_variable([1024])

        h_pool2_flat=tf.reshape(h_pool2,[-1,56*56*64])
        h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat,W_fc1) + b_fc1)
        s3 = tf.summary.histogram("h_fc1",h_fc1)
    #Dropout
    with tf.name_scope('dropout'):
        keep_prob=tf.placeholder(tf.float32)
        h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
        s4 = tf.summary.histogram("h_fc1_drop",h_fc1_drop)

    #FC2
    with tf.name_scope('fc2'):
        W_fc2 = weight_variable([1024,100])
        b_fc2 = bias_variable([100])
        h_fc2 = tf.matmul(h_fc1_drop, W_fc2) + b_fc2

    #FC layer - joints
    with tf.name_scope('conv1_joints'):
        W_j1 = weight_variable([5,1,2])
        joints_reshape = tf.reshape(joint,[-1,8500,1])
        h_j1 = tf.nn.conv1d(joints_reshape, W_j1, stride=1, padding='SAME')
        h_sig_j1 = tf.sigmoid(h_j1)
        h_j1_reshape = tf.reshape(h_sig_j1,[-1,1,8500,2])
        h_j1_pool = tf.nn.max_pool(h_j1_reshape, ksize=[1,1,2,1], strides=[1,1,2,1], padding="SAME")
        h_j1_reshape2 = tf.reshape(h_j1_pool,[-1,4250,2])

    with tf.name_scope('conv2_joints'):
        W_j2 = weight_variable([10,2,1])
        h_j2 = tf.nn.conv1d(h_j1_reshape2, W_j2, stride=1, padding='SAME')
        h_sig_j2 = tf.sigmoid(h_j2)
        h_j2_reshape = tf.reshape(h_sig_j2,[-1,4250*1]) #output channels=1

    with tf.name_scope('fc_joints'):
        dense_1 = tf.layers.dense(inputs=h_j2_reshape, units=2048, activation=tf.nn.sigmoid)
        dense_2 = tf.layers.dense(inputs=dense_1, units=2048, activation=tf.nn.sigmoid)

    #Final Fully Connected
    with tf.name_scope('fc2_fc_j'):
        concat_layer = tf.concat((h_fc2, dense_2),axis=1)
        y_conv = tf.layers.dense(inputs=concat_layer, units=7,activation=tf.nn.tanh)
        #s5 = tf.summary.histogram("y_conv", y_conv)

    return y_conv, keep_prob
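As a quick sanity check on the fc1 input size (a standalone computation, not part of the graph): the two 2x2 max-pool layers halve the 224x224 input twice, 224 -> 112 -> 56, so the flattened dimension fed into W_fc1 is 56 * 56 * 64:

```python
# Spatial size after the two 2x2 max-pool layers (pool1, pool2).
side = 224
for _ in range(2):
    side //= 2  # each 2x2 pool with stride 2 halves the side length

flat_dim = side * side * 64  # conv2 produces 64 channels
print(side, flat_dim)  # 56 200704
```

which matches the [56 * 56 * 64, 1024] shape used for W_fc1.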

I am using tf.concat to join my two networks, and AdamOptimizer with a learning rate of 1e-4. But the same loss values keep recurring, and the accuracy seems to be gradually decreasing. I have tried various learning rates. I have attached my network graph below. I know that if my loss were constant, my model would be diverging, but here the values recur periodically and I am not sure what that means.

I suspect my second network, tf.name_scope('conv1_joints'); my guess is that it is not learning anything. [Images: network architecture; model loss and accuracy]
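One property of the joints branch worth noting when checking whether it learns: every stage in it (conv1_joints, conv2_joints, dense_1, dense_2) is squashed through a sigmoid, and the sigmoid's derivative never exceeds 0.25, so backpropagated gradients can shrink geometrically through the stack. A small numpy illustration of that bound (a generic computation, not the actual network):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    s = sigmoid(z)
    return s * (1.0 - s)  # derivative of sigmoid; peaks at z = 0

print(sigmoid_grad(0.0))  # 0.25, the maximum possible slope
print(0.25 ** 4)          # 0.00390625: best-case gradient scale through 4 sigmoid layers
```

If that branch really is stuck, one common thing to try is replacing some of those sigmoids with ReLUs, as the image branch already does.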

Output:
COUNT = 80 
Iter 25280, Minibatch Loss= 1.984576, Training Accuracy= 0.25000
COUNT = 160 
Iter 25360, Minibatch Loss= 1.484576, Training Accuracy= 0.25000
COUNT = 240 
Iter 25440, Minibatch Loss= 1.984576, Training Accuracy= 0.50000
COUNT = 320 
Iter 25520, Minibatch Loss= 2.484576, Training Accuracy= 0.00000
COUNT = 400 
Iter 25600, Minibatch Loss= 1.984576, Training Accuracy= 0.00000
COUNT = 480 
Iter 25680, Minibatch Loss= 2.984576, Training Accuracy= 0.00000
COUNT = 560 
Iter 25760, Minibatch Loss= 2.984576, Training Accuracy= 0.00000
COUNT = 640 
Iter 25840, Minibatch Loss= 2.984576, Training Accuracy= 0.00000
COUNT = 720 
Iter 25920, Minibatch Loss= 2.484576, Training Accuracy= 0.00000
COUNT = 800 
Iter 26000, Minibatch Loss= 1.984576, Training Accuracy= 0.00000
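For scale (assuming the printed loss is a softmax cross-entropy over the 7 output classes): a model that predicts a near-uniform distribution scores -ln(1/7) ≈ 1.946, which is close to the 1.98 values in the log above, so these numbers are consistent with predictions that carry very little information:

```python
import math

num_classes = 7  # units in the final dense layer
uniform_loss = -math.log(1.0 / num_classes)  # cross-entropy of a uniform softmax
print(round(uniform_loss, 3))  # 1.946
```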
  1. Why am I getting recurring loss values?
  2. What else should I modify?

Update: my initialization functions:

    def weight_variable(shape):
        initial = tf.truncated_normal(shape, stddev=0.1)
        return tf.Variable(initial)
    
    def bias_variable(shape):
        initial = tf.constant(0.1, shape=shape)
        return tf.Variable(initial)
    

    Update 2: (with lr=0.1, batch_size=50)

    Iter 500, Minibatch Loss= 2.401168, Training Accuracy= 0.16000
    --------------------------RESHUFFLING--------------------
    COUNT = 100 
    Iter 1000, Minibatch Loss= 2.294569, Training Accuracy= 0.10000
    COUNT = 600 
    Iter 1500, Minibatch Loss= 2.622372, Training Accuracy= 0.08000
    --------------------------RESHUFFLING--------------------
    COUNT = 200 
    Iter 2000, Minibatch Loss= 2.481168, Training Accuracy= 0.16000
    COUNT = 700 
    Iter 2500, Minibatch Loss= 2.488970, Training Accuracy= 0.06000
    --------------------------RESHUFFLING--------------------
    COUNT = 300 
    Iter 3000, Minibatch Loss= 2.416773, Training Accuracy= 0.12000
    COUNT = 800 
    Iter 3500, Minibatch Loss= 2.521168, Training Accuracy= 0.06000
    --------------------------RESHUFFLING--------------------
    COUNT = 400 
    Iter 4000, Minibatch Loss= 2.288970, Training Accuracy= 0.12000
    --------------------------RESHUFFLING--------------------
    COUNT = 500 
    Iter 5000, Minibatch Loss= 2.384576, Training Accuracy= 0.16000
    --------------------------RESHUFFLING--------------------
    COUNT = 100 
    Iter 5500, Minibatch Loss= 2.361168, Training Accuracy= 0.10000
    COUNT = 600 
    Iter 6000, Minibatch Loss= 2.208971, Training Accuracy= 0.22000
    --------------------------RESHUFFLING--------------------
    COUNT = 200 
    Iter 6500, Minibatch Loss= 2.608970, Training Accuracy= 0.08000
    COUNT = 700 
    Iter 7000, Minibatch Loss= 2.470175, Training Accuracy= 0.12000
    --------------------------RESHUFFLING--------------------
    COUNT = 300 
    Iter 7500, Minibatch Loss= 2.475773, Training Accuracy= 0.10000
    COUNT = 800 
    Iter 8000, Minibatch Loss= 2.374569, Training Accuracy= 0.18000
    --------------------------RESHUFFLING--------------------
    COUNT = 400 
    Iter 8500, Minibatch Loss= 2.384576, Training Accuracy= 0.14000
    --------------------------RESHUFFLING--------------------
    COUNT = 500 
    Iter 9500, Minibatch Loss= 2.582372, Training Accuracy= 0.10000
    

0 Answers:

No answers yet