I am trying to modify the code from the Convolutional Neural Network TensorFlow Tutorial to get, for every test image, the individual probability of each class.
Is there an alternative to tf.nn.in_top_k that I could use? That method only returns a boolean tensor, but I want to keep the individual probability values.
I am using TensorFlow 1.4 and Python 3.5, and I think lines 62-82 and 121-129/142 are probably the ones that need to be modified. Does anyone have a hint for me?
Lines 62-82:
def eval_once(saver, summary_writer, top_k_op, summary_op):
  """Run Eval once.

  Args:
    saver: Saver.
    summary_writer: Summary writer.
    top_k_op: Top K op.
    summary_op: Summary op.
  """
  with tf.Session() as sess:
    ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir)
    if ckpt and ckpt.model_checkpoint_path:
      # Restores from checkpoint
      saver.restore(sess, ckpt.model_checkpoint_path)
      # Assuming model_checkpoint_path looks something like:
      #   /my-favorite-path/cifar10_train/model.ckpt-0,
      # extract global_step from it.
      global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1]
    else:
      print('No checkpoint file found')
      return
Lines 121-129 + 142:
[....]
images, labels = cifar10.inputs(eval_data=eval_data)
# Build a Graph that computes the logits predictions from the
# inference model.
logits = cifar10.inference(images)
# Calculate predictions.
top_k_op = tf.nn.in_top_k(logits, labels, 1)
[....]
Answer 0 (score: 2):
You can compute the class probabilities from the raw logits:
# The vector of class probabilities for each example in the batch
prediction = tf.nn.softmax(logits)
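For the concrete question of which lines to touch, below is a minimal sketch of one way this could be wired into the two places quoted above. It assumes the tutorial's own scaffolding (cifar10.inputs, FLAGS, and the queue runners started inside eval_once); the names softmax_op and all_probabilities are illustrative and not part of the tutorial.

import math
import numpy as np
import tensorflow as tf

# Around lines 121-129: build the probability op next to the existing top_k_op.
images, labels = cifar10.inputs(eval_data=eval_data)
logits = cifar10.inference(images)
top_k_op = tf.nn.in_top_k(logits, labels, 1)
softmax_op = tf.nn.softmax(logits)  # shape [batch_size, num_classes]

# Inside eval_once (lines 62-82), after saver.restore() and after the queue
# runners have been started: fetch the probabilities together with top_k_op.
num_iter = int(math.ceil(FLAGS.num_examples / FLAGS.batch_size))
all_probabilities = []
for _ in range(num_iter):
  predictions, probabilities = sess.run([top_k_op, softmax_op])
  all_probabilities.append(probabilities)

# One row per test image, one column per class.
all_probabilities = np.concatenate(all_probabilities, axis=0)

The only real change to the tutorial would be passing the extra op into eval_once and adding it to the sess.run call that already fetches top_k_op.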
As a bonus, here is how to get the accuracy:
# Note: in this tutorial, `labels` are integer class indices (not one-hot),
# so compare them directly with the argmax of the logits.
correct_pred = tf.equal(tf.cast(tf.argmax(logits, 1), tf.int32), labels)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
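If you adopt the loop sketched above, the accuracy op can simply be fetched in the same sess.run call as the probabilities, so both are computed on the same batches (again only a sketch; batch_accuracies is an illustrative name):

batch_accuracies = []
for _ in range(num_iter):
  # Run both ops in one call so they see the same input batch.
  probabilities, batch_accuracy = sess.run([softmax_op, accuracy])
  batch_accuracies.append(batch_accuracy)
print('Test accuracy: %.4f' % (sum(batch_accuracies) / len(batch_accuracies)))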