TensorFlow GoogLeNet inception performing poorly

Asked: 2017-07-31 16:24:21

Tags: python tensorflow neural-network deep-learning conv-neural-network

I am trying to implement a version of the GoogLeNet inception neural network, but I am getting 10% accuracy on the MNIST data set. This is alarming, since even a simple neural network should reach 97+% accuracy on this dataset, so I am confident that I have not implemented the inception network correctly. I have provided my code below.

The inception neural network that I am following

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
import tensorflow as tf

x  = tf.placeholder(dtype = tf.float32, shape = [None,784])
y_ = tf.placeholder(dtype = tf.float32, shape = [None,10])

x_input = tf.reshape(x,[-1,28,28,1])


# 1x1 Convolution
W1x1 = tf.Variable(tf.random_normal([1,1,1,1]))
b1x1 = tf.Variable(tf.random_normal([1]))
output1x1 = tf.add(tf.nn.conv2d(x_input,W1x1, strides = [1,1,1,1], padding = 'SAME'),b1x1)
output1x1 = tf.nn.relu(output1x1)


# 5x5 Convolution
W5x5 = tf.Variable(tf.random_normal([1,1,1,1]))
b5x5 = tf.Variable(tf.random_normal([1]))
output5x5 = tf.add(tf.nn.conv2d(output1x1,W5x5, strides = [1,1,1,1], padding = 'SAME'),b5x5)
output5x5 = tf.nn.relu(output5x5)


# 3x3 Convolution
W3x3 = tf.Variable(tf.random_normal([1,1,1,1]))
b3x3 = tf.Variable(tf.random_normal([1]))
output3x3 = tf.add(tf.nn.conv2d(output1x1,W3x3, strides = [1,1,1,1], padding = 'SAME'),b3x3)
output3x3 = tf.nn.relu(output3x3)


# AveragePooling followed by 1x1 convolution
outputPool = tf.nn.avg_pool(output1x1, ksize = [1,2,2,1], strides = [1,1,1,1], padding = "SAME")
Wo1x1 = tf.Variable(tf.random_normal([1,1,1,1]))
bo1x1 = tf.Variable(tf.random_normal([1]))
outputo1x1 = tf.add(tf.nn.conv2d(outputPool,Wo1x1, strides = [1,1,1,1], padding = 'SAME'),bo1x1)
outputo1x1 = tf.nn.relu(outputo1x1)


# Concatenate the 4 convolution products
finalouput = tf.concat([output1x1, output5x5, output3x3, outputo1x1], 3)
finalouput = tf.reshape(finalouput, [-1, 7*7*64])

#Add a fully connected layer
W_fc = tf.Variable(tf.random_normal([7*7*64,1024]))
b_fc = tf.Variable(tf.random_normal([1024]))  
output_fc = tf.add(tf.matmul(finalouput,W_fc), b_fc )
output_fc = tf.nn.relu(output_fc)
output_fc = tf.nn.dropout(output_fc, keep_prob = 0.85)

#Final layer
W_final = tf.Variable(tf.random_normal([1024,10]))
b_final = tf.Variable(tf.random_normal([10]))
predictions = tf.add(tf.matmul(output_fc,W_final), b_final)


# Train the model
cost = tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(labels = y_  ,logits = predictions))
optimiser = tf.train.AdamOptimizer(1e-3).minimize(cost)
correct_prediction = tf.equal(tf.argmax(predictions, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(20000):
        batch = mnist.train.next_batch(50)
        if i % 100 == 0:
            train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1]})
            print('step %d, training accuracy %g' % (i, train_accuracy))
        optimiser.run(feed_dict={x: batch[0], y_: batch[1]})
    print('test accuracy %g' % accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels,}))

3 Answers:

Answer 0 (score: 2)

The problem is with the weight initialization. Weights initialized with tf.random_normal() have a standard deviation of 1, which is far too high; with weights that large, the network cannot make progress on the problem.

Change the weight initialization to:

W** = tf.Variable(tf.random_normal(..., stddev=0.01))
b** = tf.Variable(tf.random_normal(..., stddev=0.001))
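For example, applied to the fully connected layer from the question (the shapes are taken from the original code, and the stddev values are this answer's suggestions):

W_fc = tf.Variable(tf.random_normal([7*7*64, 1024], stddev=0.01))
b_fc = tf.Variable(tf.random_normal([1024], stddev=0.001))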

Answer 1 (score: 0)

Your model is very shallow. GoogLeNet has 22 layers.

I would not recommend implementing the layers yourself, since it is error-prone. It is better to use the TensorFlow layer abstractions. You may also want to look at or reuse an existing implementation, for example here
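As a rough illustration of what the layer abstractions buy you, here is a minimal sketch of one inception-style block using the TF 1.x tf.layers API. The filter counts are illustrative choices, not values from the question or from the GoogLeNet paper:

import tensorflow as tf

def inception_block(inputs):
    # Four parallel branches; 'same' padding keeps the spatial size.
    branch1 = tf.layers.conv2d(inputs, filters=16, kernel_size=1,
                               padding='same', activation=tf.nn.relu)
    # 1x1 reduction followed by a 3x3 convolution.
    branch2 = tf.layers.conv2d(inputs, filters=16, kernel_size=1,
                               padding='same', activation=tf.nn.relu)
    branch2 = tf.layers.conv2d(branch2, filters=32, kernel_size=3,
                               padding='same', activation=tf.nn.relu)
    # 1x1 reduction followed by a 5x5 convolution.
    branch3 = tf.layers.conv2d(inputs, filters=16, kernel_size=1,
                               padding='same', activation=tf.nn.relu)
    branch3 = tf.layers.conv2d(branch3, filters=32, kernel_size=5,
                               padding='same', activation=tf.nn.relu)
    # 3x3 pooling followed by a 1x1 projection.
    branch4 = tf.layers.max_pooling2d(inputs, pool_size=3, strides=1,
                                      padding='same')
    branch4 = tf.layers.conv2d(branch4, filters=16, kernel_size=1,
                               padding='same', activation=tf.nn.relu)
    # Concatenate the branch outputs along the channel axis.
    return tf.concat([branch1, branch2, branch3, branch4], axis=3)

Note that, unlike the question's code, every branch here reads from the same input and the kernel sizes actually differ between branches, which is what makes concatenating them useful.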

Answer 2 (score: 0)

Maybe try concatenating them in a different order?