I have been practicing machine learning and came across the MNIST tutorial. While working through it, I wrote this code.
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import numpy as np
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
n_hidden_layer_1 = 500
n_hidden_layer_2 = 500
n_hidden_layer_3 = 500
n_classes = 10
batch_size = 100
x = tf.placeholder('float', shape = [None, 784])
y = tf.placeholder('float')
hidden_layer_1 = {
    'weights': tf.Variable(tf.random_normal(shape = [784, n_hidden_layer_1])),
    'bias': tf.Variable(tf.random_normal(shape = [n_hidden_layer_1]))
}
hidden_layer_2 = {
    'weights': tf.Variable(tf.random_normal(shape = [n_hidden_layer_1, n_hidden_layer_2])),
    'bias': tf.Variable(tf.random_normal(shape = [n_hidden_layer_2]))
}
hidden_layer_3 = {
    'weights': tf.Variable(tf.random_normal(shape = [n_hidden_layer_2, n_hidden_layer_3])),
    'bias': tf.Variable(tf.random_normal(shape = [n_hidden_layer_3]))
}
output_layer = {
    'weights': tf.Variable(tf.random_normal(shape = [n_hidden_layer_3, n_classes])),
    'bias': tf.Variable(tf.random_normal(shape = [n_classes]))
}
hidden_layer_1_output = tf.nn.relu(tf.add(tf.matmul(x, hidden_layer_1['weights']), hidden_layer_1['bias']))
hidden_layer_2_output = tf.nn.relu(tf.add(tf.matmul(hidden_layer_1_output, hidden_layer_2['weights']), hidden_layer_2['bias']))
hidden_layer_3_output = tf.nn.relu(tf.add(tf.matmul(hidden_layer_2_output, hidden_layer_3['weights']), hidden_layer_3['bias']))
final_output = tf.nn.relu(tf.add(tf.matmul(hidden_layer_3_output, output_layer['weights']), output_layer['bias']))
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=final_output, labels=y))
model = tf.train.AdamOptimizer().minimize(cost)
epochs = 10
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(epochs):
        epoch_loss = 0
        for _ in range(mnist.train.num_examples/batch_size):
            P,Q = mnist.train.next_batch(batch_size)
            _,c = sess.run([model, cost], feed_dict = {x:P, y:Q})
            epoch_loss+=c
        print("Epoch no:",i,"Epoch_loss:",epoch_loss)
    correct = tf.equal(tf.argmax(final_output,1), tf.argmax(y,1))
    accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
    print("accuracy: ",accuracy.eval({x:mnist.test.images, y:mnist.test.labels}))
This produced the following output:
Extracting /tmp/data/train-images-idx3-ubyte.gz
Extracting /tmp/data/train-labels-idx1-ubyte.gz
Extracting /tmp/data/t10k-images-idx3-ubyte.gz
Extracting /tmp/data/t10k-labels-idx1-ubyte.gz
('Epoch no:', 0, 'Epoch_loss:', 265771.25100541115)
('Epoch no:', 1, 'Epoch_loss:', 1310.440309047699)
('Epoch no:', 2, 'Epoch_loss:', 1262.8069067001343)
('Epoch no:', 3, 'Epoch_loss:', 1262.8069069385529)
('Epoch no:', 4, 'Epoch_loss:', 1262.8069067001343)
('Epoch no:', 5, 'Epoch_loss:', 1262.8069069385529)
('Epoch no:', 6, 'Epoch_loss:', 1262.8069067001343)
('Epoch no:', 7, 'Epoch_loss:', 1262.8069067001343)
('Epoch no:', 8, 'Epoch_loss:', 1262.8069064617157)
('Epoch no:', 9, 'Epoch_loss:', 1262.8069064617157)
('accuracy: ', 0.1008)
Could you tell me the likely reasons why this code gives inaccurate results, and how to improve it?
Answer 0 (score: 2)
There are a couple of problems with your code:
Remove the relu activation on final_output. softmax_cross_entropy_with_logits applies a softmax to final_output internally, so the network should emit raw logits. With the relu in place, every negative logit is clipped to zero; once the logits collapse to all zeros, the softmax is uniform over the 10 classes, which matches the flat epoch loss and the chance-level (~10%) test accuracy you observed.
final_output = tf.add(tf.matmul(hidden_layer_3_output, output_layer['weights']), output_layer['bias'])
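As a quick sanity check (this snippet is illustrative, not part of your code): if the relu clips every logit to 0, the softmax becomes uniform and the per-example cross-entropy is ln(10) ≈ 2.3026, which lines up roughly with the flat epoch loss above (about 550 batches × 2.3026 ≈ 1266) and the ~10% accuracy.
import numpy as np

# With all logits clipped to 0, softmax is uniform over the 10 classes.
logits = np.zeros(10)
probs = np.exp(logits) / np.exp(logits).sum()  # each entry is 0.1
loss = -np.log(probs[0])                       # true-class index is arbitrary here
print(probs[0], loss)                          # prints: 0.1 2.302585092994046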
Set the standard deviation of the initial weights to a lower value, for example:
'weights': tf.Variable(tf.random_normal(shape = [784, n_hidden_layer_1], stddev=0.005))
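Putting both fixes together, here is a minimal sketch of the layer definitions and forward pass (the dense_layer helper is my own illustrative refactor, not from your code; stddev=0.005 follows the value suggested above):
def dense_layer(inputs, in_dim, out_dim, activation=None):
    # Small-stddev initialization keeps the initial pre-activations in a sane range.
    weights = tf.Variable(tf.random_normal(shape=[in_dim, out_dim], stddev=0.005))
    bias = tf.Variable(tf.random_normal(shape=[out_dim], stddev=0.005))
    z = tf.add(tf.matmul(inputs, weights), bias)
    return activation(z) if activation is not None else z

h1 = dense_layer(x, 784, n_hidden_layer_1, activation=tf.nn.relu)
h2 = dense_layer(h1, n_hidden_layer_1, n_hidden_layer_2, activation=tf.nn.relu)
h3 = dense_layer(h2, n_hidden_layer_2, n_hidden_layer_3, activation=tf.nn.relu)
final_output = dense_layer(h3, n_hidden_layer_3, n_classes)  # raw logits, no relu
In TF 1.x, tf.truncated_normal is also a common alternative for this kind of initialization.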