I just started learning TensorFlow and wrote a model that trains on MNIST. I am following a book, but I still ran into a problem; could you help me figure it out?
Here is my code, with the problem described after it. Thanks a lot!
x = tf.placeholder(tf.float32,[None,INPUT_NODE],name='input')
y_ = tf.placeholder(tf.float32,[None,OUTPUT_NODE],name='output')
weights1 = tf.Variable(tf.truncated_normal([INPUT_NODE,LAYER1_NODE],stddev=0.1))
biases1 = tf.Variable(tf.constant(0.1,shape=[LAYER1_NODE]))
weights2 = tf.Variable(tf.truncated_normal([LAYER1_NODE,OUTPUT_NODE],stddev=0.1))
biases2 = tf.Variable(tf.constant(0.1,shape=[OUTPUT_NODE]))
Next, y defines the forward pass without using the moving-average model:
y = inference(x,None,weights1,biases1,weights2,biases2)
global_step = tf.Variable(0,trainable=False)
variable_averages = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY,global_step)
variables_averages_op = variable_averages.apply(tf.trainable_variables())
Next, average_y defines the forward pass that uses the moving-average model:
average_y = inference(x,variable_averages,weights1,biases1,weights2,biases2)
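For reference, the inference function is not shown above; it follows the two-layer pattern from the book, roughly like this (a sketch from memory, so the exact body may differ slightly from my real code):
def inference(input_tensor, avg_class, weights1, biases1, weights2, biases2):
    # When no moving-average class is passed in, use the raw variables.
    if avg_class is None:
        layer1 = tf.nn.relu(tf.matmul(input_tensor, weights1) + biases1)
        return tf.matmul(layer1, weights2) + biases2
    # Otherwise use the shadow (averaged) copies maintained by ExponentialMovingAverage.
    layer1 = tf.nn.relu(tf.matmul(input_tensor, avg_class.average(weights1)) +
                        avg_class.average(biases1))
    return tf.matmul(layer1, avg_class.average(weights2)) + avg_class.average(biases2)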
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y,labels=tf.arg_max(y_,1))
cross_entropy_mean = tf.reduce_mean(cross_entropy)
regularizer = tf.contrib.layers.l2_regularizer(REGULARIZATION_RATE)
regularization = regularizer(variable_averages.average(weights1)) +\
regularizer(variable_averages.average(weights2))
loss = cross_entropy_mean + regularization
learning_rate = tf.train.exponential_decay(
    LEARNING_RATE_BASE,
    global_step,
    mnist.train.num_examples / BATCH_SIZE,
    LEARNING_RATE_DECAY
)
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss,global_step=global_step)
train_op = tf.group(train_step,variables_averages_op)
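(I have also seen the book write the same step with tf.control_dependencies; as far as I understand, the two forms are equivalent:)
with tf.control_dependencies([train_step, variables_averages_op]):
    train_op = tf.no_op(name='train')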
The problem: when I use average_y to compute the accuracy, training does not seem to improve at all:
After 0 training steps, acc in validation is 0.0742
After 1000 training steps, acc in validation is 0.0924
After 2000 training steps, acc in validation is 0.0924
When I use y instead of average_y, everything works fine, which really confuses me:
After 0 training steps, acc in validation is 0.0686
After 1000 training steps, acc in validation is 0.9716
After 2000 training steps, acc in validation is 0.9768
#correct_prediction = tf.equal(tf.arg_max(y,1),tf.arg_max(y_,1))
correct_prediction = tf.equal(tf.arg_max(average_y,1),tf.arg_max(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
with tf.Session() as sess:
    tf.initialize_all_variables().run()
    validate_feed = {
        x: mnist.validation.images,
        y_: mnist.validation.labels
    }
    test_feed = {
        x: mnist.test.images,
        y_: mnist.test.labels
    }
    for i in range(TRAINING_STEPS):
        if i % 1000 == 0:
            validate_acc = sess.run(accuracy, feed_dict=validate_feed)
            print("After %d training steps, acc in validation is %g" % (i, validate_acc))
        xs, ys = mnist.train.next_batch(BATCH_SIZE)
        sess.run([train_op, global_step], feed_dict={x: xs, y_: ys})
    test_acc = sess.run(accuracy, feed_dict=test_feed)
    print("After %d training steps, acc in test is %g" % (TRAINING_STEPS, test_acc))
Answer (score: 0)
In your snippet, you are training the classification loss tied to the y logits rather than to average_y, so the inference graph that uses the exponential moving averages is effectively not the one being trained:
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y,labels=tf.arg_max(y_,1))
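To see this concretely, you can compare the raw weights with their moving-average shadow copies while training; if the shadows are not tracking the trained weights, average_y keeps producing near-random predictions, which matches the ~0.09 accuracy you see. A small check you could drop into your existing session (names taken from your snippet):
# Build once, next to the accuracy node: mean absolute gap between
# weights1 and the shadow variable maintained for it by the EMA object.
shadow_gap = tf.reduce_mean(tf.abs(variable_averages.average(weights1) - weights1))
# Then inside the i % 1000 == 0 block:
print("mean |weights1 - shadow(weights1)|: %g" % sess.run(shadow_gap))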