Implementing ResNet in TensorFlow: accuracy is not as high as expected

Asked: 2017-05-09 09:56:22

Tags: machine-learning neural-network computer-vision deep-learning resnet

I am a beginner in deep learning, and I recently tried to implement a 34-layer residual neural network. I trained it on the CIFAR-10 images, but the test accuracy is not as high as I expected, only about 65%, as the screenshot below shows.

[Screenshot: testing accuracy]

Basically, I implemented the residual blocks as follows.

For a residual block with no increase in dimensions, it looks like this example:

"""
    Convolution Layers 1, Sub Unit 1
"""

conv_weights_1_2 = tf.Variable(tf.random_normal([3,3,64,64]),dtype=tf.float32)
conv_1_2 = tf.nn.conv2d(conv_1_1, conv_weights_1_2, strides=[1,1,1,1], padding="SAME")

axis = list(range(len(conv_1_2.get_shape()) - 1))
mean, variance = tf.nn.moments(conv_1_2, axis)

beta = tf.Variable(tf.zeros(conv_1_2.get_shape()[-1:]),dtype=tf.float32)
gamma = tf.Variable(tf.ones(conv_1_2.get_shape()[-1:]),dtype=tf.float32)

conv_1_2 = tf.nn.batch_normalization(conv_1_2, mean, variance, beta, gamma, 0.001)

conv_1_2 = tf.nn.relu(conv_1_2)

conv_weights_1_3 = tf.Variable(tf.random_normal([3,3,64,64]),dtype=tf.float32)
conv_1_3 = tf.nn.conv2d(conv_1_2, conv_weights_1_3, strides=[1,1,1,1], padding="SAME")

axis = list(range(len(conv_1_3.get_shape()) - 1))
mean, variance = tf.nn.moments(conv_1_3, axis)

beta = tf.Variable(tf.zeros(conv_1_3.get_shape()[-1:]),dtype=tf.float32)
gamma = tf.Variable(tf.ones(conv_1_3.get_shape()[-1:]),dtype=tf.float32)

conv_1_3 = tf.nn.batch_normalization(conv_1_3, mean, variance, beta, gamma, 0.001)

conv_1_3 = conv_1_3 + conv_1_1

conv_1_3 = tf.nn.relu(conv_1_3)
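For reference, each `tf.nn.batch_normalization` call above normalizes the activations with the current batch's per-channel mean and variance (computed by `tf.nn.moments` over all axes except the last), then scales by `gamma` and shifts by `beta`. A minimal NumPy sketch of that same computation (the function name is illustrative, not from the repository):

```python
import numpy as np

def batch_norm(x, beta, gamma, eps=0.001):
    # Normalize over all axes except the last (channel) axis, matching
    # the `axis = list(range(len(shape) - 1))` computation in the code above.
    axes = tuple(range(x.ndim - 1))
    mean = x.mean(axis=axes)
    var = x.var(axis=axes)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

x = np.random.randn(4, 8, 8, 64).astype(np.float32)
y = batch_norm(x, beta=np.zeros(64), gamma=np.ones(64))
# With beta=0 and gamma=1, each channel of the output has roughly
# zero mean and unit variance.
```

Note that this (like the code in the question) uses only batch statistics; it keeps no moving averages for use at test time.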

For a block that increases the dimensions, it looks like this:

"""
    Convolution Layers 3 starts here.
    Convolution Layers 3, Sub Unit 0
"""

conv_weights_3_0 = tf.Variable(tf.random_normal([3,3,128,256]),dtype=tf.float32)
conv_3_0 = tf.nn.conv2d(conv_2_out, conv_weights_3_0, strides=[1,2,2,1], padding="SAME")

axis = list(range(len(conv_3_0.get_shape()) - 1))
mean, variance = tf.nn.moments(conv_3_0, axis)

beta = tf.Variable(tf.zeros(conv_3_0.get_shape()[-1:]),dtype=tf.float32)
gamma = tf.Variable(tf.ones(conv_3_0.get_shape()[-1:]),dtype=tf.float32)

conv_3_0 = tf.nn.batch_normalization(conv_3_0, mean, variance, beta, gamma, 0.001)

conv_3_0 = tf.nn.relu(conv_3_0)

conv_weights_3_1 = tf.Variable(tf.random_normal([3,3,256,256]),dtype=tf.float32)
conv_3_1 = tf.nn.conv2d(conv_3_0, conv_weights_3_1, strides=[1,1,1,1], padding="SAME")

axis = list(range(len(conv_3_1.get_shape()) - 1))
mean, variance = tf.nn.moments(conv_3_1, axis)

beta = tf.Variable(tf.zeros(conv_3_1.get_shape()[-1:]),dtype=tf.float32)
gamma = tf.Variable(tf.ones(conv_3_1.get_shape()[-1:]),dtype=tf.float32)

conv_3_1 = tf.nn.batch_normalization(conv_3_1, mean, variance, beta, gamma, 0.001)

conv_weights_3_pre = tf.Variable(tf.ones([1,1,128,256]),dtype=tf.float32,trainable=False)
conv_3_pre = tf.nn.conv2d(conv_2_out, conv_weights_3_pre, strides=[1,2,2,1], padding="SAME")

axis = list(range(len(conv_3_pre.get_shape()) - 1))
mean, variance = tf.nn.moments(conv_3_pre, axis)

conv_3_pre = tf.nn.batch_normalization(conv_3_pre, mean, variance, None, None, 0.001)

conv_3_1 = conv_3_1 + conv_3_pre

conv_3_1 = tf.nn.relu(conv_3_1)
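One thing worth double-checking in the shortcut path: because `conv_weights_3_pre` is a fixed all-ones `[1,1,128,256]` kernel, every one of the 256 output channels is the same value, namely the sum over all 128 input channels at that spatial position. A NumPy sketch of what that 1x1, stride-2 convolution computes (for a 1x1 kernel, `"SAME"` padding adds nothing and stride 2 just subsamples):

```python
import numpy as np

def conv1x1_stride2(x, w):
    # x: (N, H, W, Cin), w: (1, 1, Cin, Cout) -- mirrors tf.nn.conv2d
    # with a 1x1 kernel, strides=[1, 2, 2, 1], padding="SAME".
    x = x[:, ::2, ::2, :]                       # spatial stride of 2
    return np.tensordot(x, w[0, 0], axes=([3], [0]))

x = np.random.randn(2, 8, 8, 128).astype(np.float32)
w = np.ones((1, 1, 128, 256), dtype=np.float32)  # the fixed all-ones kernel
y = conv1x1_stride2(x, w)
# Every output channel is identical: the sum of all 128 input channels.
```

This is quite different from the trainable 1x1 projection shortcut (option B in the ResNet paper) or the identity/zero-padding shortcut (option A), so it may be worth comparing against those.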

I trained on all 50,000 CIFAR-10 training images using AdamOptimizer with a learning rate of 0.001, and tested on the 10,000 test images. In the figure, training ran for almost 1,000 epochs, each epoch consisting of 500 batches of 100 images. Before each epoch I shuffled all 50,000 training images. Even over this long run, the test accuracy stayed at roughly 65%.
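The per-epoch shuffling and batching described above can be sketched like this (a hypothetical helper with placeholder data, not the input pipeline from the repository):

```python
import numpy as np

def epoch_batches(images, labels, batch_size=100, rng=None):
    # One epoch: shuffle the whole training set, then yield
    # consecutive fixed-size batches.
    rng = rng or np.random.default_rng(0)
    order = rng.permutation(len(images))
    for start in range(0, len(images), batch_size):
        idx = order[start:start + batch_size]
        yield images[idx], labels[idx]

# Placeholder arrays with the CIFAR-10 training-set length.
images = np.zeros((50000, 1), dtype=np.float32)
labels = np.zeros(50000, dtype=np.int64)
n_batches = sum(1 for _ in epoch_batches(images, labels))
# 50,000 images at 100 per batch -> 500 batches per epoch
```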

The full code can be found at https://github.com/freegyp/my-implementation-of-ResNet-in-Tensorflow. Is there anything wrong with my implementation? I would appreciate any suggestions for improving it.

0 Answers:

There are no answers yet.