TensorFlow: CNN not learning at all

Asked: 2017-04-04 10:08:14

Tags: python tensorflow conv-neural-network

I just created my own CNN, which reads data from disk and tries to learn. But the weights don't seem to learn at all; they stay random.

Only the biases change slightly. I have already tried using grayscale images, without success. I also tried reducing my dataset to just 2 classes, which in my opinion should work, but the measured accuracy stays below 50% (maybe I am computing the accuracy wrong).

Here is some code:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, classes])
keep_prob = tf.placeholder(tf.float32)  # dropout keep probability (definition assumed; it is fed below but was not shown)
weights = {
    'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
    'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
    'wd1': tf.Variable(tf.random_normal([12*12*64, 1024])),
    'out': tf.Variable(tf.random_normal([1024, classes]))
}
biases = {
    'bc1': tf.Variable(tf.random_normal([32])),
    'bc2': tf.Variable(tf.random_normal([64])),
    'bd1': tf.Variable(tf.random_normal([1024])),
    'out': tf.Variable(tf.random_normal([classes]))
}

pred = model.conv_net(x, weights, biases, keep_prob, imgSize)

with tf.name_scope("cost"):
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
with tf.name_scope("optimizer"):
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
with tf.name_scope("accuracy"):
    correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    while step < epochs:
        batch_x, batch_y = batch_creator(batch_size, train_x.shape[0], 'train')
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y, keep_prob: dropout})
        if step % display_step == 0:
            batchv_x, batchv_y = batch_creator(batch_size, val_x.shape[0], 'val')
            summary, loss, acc = sess.run([merged, cost, accuracy], feed_dict={x: batchv_x, y: batchv_y})
            train_writer.add_summary(summary, step)
        step += 1  # step counter (increment not shown in the original snippet)

I inspected the created batches and they look fine. batch_x is an array of 2304 float values representing a 48x48 image, and batch_y is an array with one-hot labels: [0 0 ... 0 1 0 ... 0 0]
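
A quick sanity check along those lines (just a sketch; it assumes batch_creator returns plain numpy arrays, as described above):

import numpy as np

batch_x, batch_y = batch_creator(batch_size, train_x.shape[0], 'train')
assert batch_x.shape[1] == 48 * 48            # 2304 floats per image
assert batch_y.shape[1] == classes            # one column per class
assert np.all(batch_y.sum(axis=1) == 1)       # exactly one 1 in each one-hot label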

Here is my model:

def conv2d(x, W, b, strides=1):
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)

def maxpool2d(x, k=2):
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
                          padding='SAME')

def conv_net(x, weights, biases, dropout, imgSize):  # note: dropout is passed in but never applied below
    with tf.name_scope("Reshaping_data") as scope:
        x = tf.reshape(x, shape=[-1, imgSize, imgSize, 1], name="inp") #(?, 48, 48, 1)

    with tf.name_scope("Conv1") as scope:
        conv1 = conv2d(x, weights['wc1'], biases['bc1'])
        conv1 = maxpool2d(conv1, k=2) #(?, 24, 24, 32)

    with tf.name_scope("Conv2") as scope:
        conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
        conv2 = maxpool2d(conv2, k=2) #(?, 12, 12, 64)

    with tf.name_scope("FC1") as scope:
        fc1 = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]]) #(?, 9216)
        fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1']) #(?, 1024)
        fc1 = tf.nn.relu(fc1) #(?, 1024)

    out = tf.add(tf.matmul(fc1, weights['out']), biases['out'], name="out") #(?, 43)
    return out

Thanks for your help!

PS: This is what some filters of the second convolutional layer look like (it doesn't matter how many epochs have passed):

[image: conv2filter]

1 Answer:

Answer 0 (score: 1)

I have tried your network with the CIFAR-10 dataset.

I'm afraid the problem is caused by the huge number of parameters, especially in the fc1 layer. You could try reducing the number of kernels in the convolutional layers (dividing by 2, for example) and using 4 or 6 as k in the pooling to shrink the spatial dimensions. That cuts the number of weights in the fc1 layer considerably, as sketched below.
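
For example, a sketch of that reduction (untested on your data; the shapes assume the 48x48 input from the question). Halving the kernel counts and pooling twice with k=4 shrinks the fc1 input from 12*12*64 = 9216 to 3*3*32 = 288 units:

weights = {
    'wc1': tf.Variable(tf.random_normal([5, 5, 1, 16])),    # 32 -> 16 kernels
    'wc2': tf.Variable(tf.random_normal([5, 5, 16, 32])),   # 64 -> 32 kernels
    'wd1': tf.Variable(tf.random_normal([3*3*32, 1024])),   # 48 -> 12 -> 3 after two k=4 pools
    'out': tf.Variable(tf.random_normal([1024, classes]))
}
# ...and in conv_net, pool with maxpool2d(conv1, k=4) and maxpool2d(conv2, k=4)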

With that many parameters, also pay attention to weight initialization. Using tf.contrib.layers.xavier_initializer() or tf.random_normal_initializer(stddev=np.sqrt(2.0 / n)) gives a much better start.
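
For instance (a sketch only; tf.get_variable is used here so an initializer can be passed, and n is the fan-in of the layer):

import numpy as np

init = tf.contrib.layers.xavier_initializer()
weights = {
    'wc1': tf.get_variable('wc1', [5, 5, 1, 32], initializer=init),
    'wc2': tf.get_variable('wc2', [5, 5, 32, 64], initializer=init),
    'wd1': tf.get_variable('wd1', [12*12*64, 1024], initializer=init),
    'out': tf.get_variable('out', [1024, classes], initializer=init)
}
# Alternative He-style initialization, e.g. for 'wc2' where n = 5*5*32 (fan-in):
# tf.random_normal_initializer(stddev=np.sqrt(2.0 / (5*5*32)))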

After reducing the parameters and initializing the weights better, the loss on CIFAR-10 started to converge. You can try the same with your own dataset.