TensorFlow: model saves successfully but restoring fails — where am I going wrong?

Time: 2017-04-30 13:49:05

Tags: python tensorflow

I have recently been learning TensorFlow, and I am clearly a beginner, but I have already tried many approaches to this problem. I wrote this code to train my model, and to restore it directly instead of training again when the model.ckpt file already exists. After training, my test accuracy is around 90%, but if I restore the model directly the accuracy is only around 10%, so I think I am failing to restore it. I only have two variables, named `weights` and `biases`. Here is the main part of my code:

def train(bottleneck_tensor, jpeg_data_tensor):
    image_lists = create_image_lists(TEST_PERCENTAGE, VALIDATION_PERCENTAGE)
    n_classes = len(image_lists.keys())

    # input
    bottleneck_input = tf.placeholder(tf.float32, [None, BOTTLENECK_TENSOR_SIZE],
                                      name='BottleneckInputPlaceholder')
    ground_truth_input = tf.placeholder(tf.float32, [None, n_classes], name='GroundTruthInput')

    # this is the new-layer code
    # with tf.name_scope('final_training_ops'):
    #     weights = tf.Variable(tf.truncated_normal([BOTTLENECK_TENSOR_SIZE, n_classes], stddev=0.001))
    #     biases = tf.Variable(tf.zeros([n_classes]))
    #     logits = tf.matmul(bottleneck_input, weights) + biases
    logits = transfer_new_layer.new_layer(bottleneck_input, n_classes)
    final_tensor = tf.nn.softmax(logits)

    # losses
    cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=ground_truth_input)
    cross_entropy_mean = tf.reduce_mean(cross_entropy)
    train_step = tf.train.GradientDescentOptimizer(LEARNING_RATE).minimize(cross_entropy_mean)

    # calculate the accuracy
    with tf.name_scope('evaluation'):
        correct_prediction = tf.equal(tf.argmax(final_tensor, 1), tf.argmax(ground_truth_input, 1))
        evaluation_step = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    image_order_step = tf.arg_max(final_tensor, 1)

    saver = tf.train.Saver(tf.global_variables(), write_version=tf.train.SaverDef.V1)

    with tf.Session() as sess:
        init = tf.global_variables_initializer()
        sess.run(init)
        if os.path.exists('F:/_pythonWS/imageClassifier/ckpt/imagesClassFilter.ckpt'):
            saver.restore(sess, "F:/_pythonWS/imageClassifier/ckpt/imagesClassFilter.ckpt")
            reader = tf.train.NewCheckpointReader('F:/_pythonWS/imageClassifier/ckpt/imagesClassFilter.ckpt')
            all_variables = reader.get_variable_to_shape_map()
            for each in all_variables:
                print(each, all_variables[each])
                print(reader.get_tensor(each))
        else:
            print("retrain model")
            for i in range(STEPS):
                train_bottlenecks, train_ground_truth = get_random_cached_bottlenecks(
                    sess, n_classes, image_lists, BATCH, 'training', jpeg_data_tensor, bottleneck_tensor)
                sess.run(train_step,
                         feed_dict={bottleneck_input: train_bottlenecks, ground_truth_input: train_ground_truth})
                # test accuracy on the validation data
                if i % 100 == 0 or i + 1 == STEPS:
                    validation_bottlenecks, validation_ground_truth = get_random_cached_bottlenecks(
                        sess, n_classes, image_lists, BATCH, 'validation', jpeg_data_tensor, bottleneck_tensor)
                    validation_accuracy = sess.run(evaluation_step, feed_dict={
                        bottleneck_input: validation_bottlenecks, ground_truth_input: validation_ground_truth})
                    print('Step %d: Validation accuracy on random sampled %d examples = %.1f%%' % (
                        i, BATCH, validation_accuracy * 100))
            saver.save(sess, 'F:/_pythonWS/imageClassifier/ckpt/imagesClassFilter.ckpt')
            print(tf.get_session_tensor("final_training_ops/Variable", dtype=float))
            print(tf.get_session_tensor("final_training_ops/Variable_1", dtype=float))
        print('Beginning Test')
        # test
        test_bottlenecks, test_ground_truth = get_tst_bottlenecks(sess, image_lists, n_classes,
                                                                  jpeg_data_tensor,
                                                                  bottleneck_tensor)
        # saver.restore(sess, 'F:/_pythonWS/imageClassifier/ckpt/imagesClassFilter.ckpt')
        test_accuracy = sess.run(evaluation_step, feed_dict={
            bottleneck_input: test_bottlenecks, ground_truth_input: test_ground_truth})
        print('Final test accuracy = %.1f%%' % (test_accuracy * 100))

        label_name_list = list(image_lists.keys())
        for label_index, label_name in enumerate(label_name_list):
            category = 'testing'
            for index, unused_base_name in enumerate(image_lists[label_name][category]):
                bottlenecks = []
                ground_truths = []
                print("real label: %s" % label_name)
                # print(unused_base_name)
                bottleneck = get_or_create_bottleneck(sess, image_lists, label_name, index, category,
                                                      jpeg_data_tensor, bottleneck_tensor)
                # saver.restore(sess, 'F:/_pythonWS/imageClassifier/ckpt/imagesClassFilter.ckpt')
                ground_truth = np.zeros(n_classes, dtype=np.float32)
                ground_truth[label_index] = 1.0
                bottlenecks.append(bottleneck)
                ground_truths.append(ground_truth)
                image_kind = sess.run(image_order_step, feed_dict={
                    bottleneck_input: bottlenecks, ground_truth_input: ground_truths})
                image_kind_order = int(image_kind[0])
                print("predicted label: %s" % label_name_list[image_kind_order])
2 Answers:

Answer 0 (score: 0)

Try saving and restoring like this:

saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(initVar)

    # restore saved model
    new_saver = tf.train.import_meta_graph('my-model.meta')
    new_saver.restore(sess, tf.train.latest_checkpoint('./'))

    # save model weights, after training process
    saver.save(sess, 'my-model')

Define the tf.train.Saver outside the session. Once training is finished, save the weights with saver.save(sess, 'my-model'), and restore them as shown above.

Answer 1 (score: 0)

I found where I went wrong. In fact, the model was restored successfully; the problem is that I built the label list in a random order on every run. When I used image_order_step = tf.arg_max(final_tensor, 1) to classify a test image, the label order had changed since the previous run, but the weights and biases were still those from the last training. For example, the first time the label list was [A1, A2, A3, A4, A5, A6], so when image_order_step = tf.arg_max(final_tensor, 1) returned 3, the result was A4. The next time the label list became [A5, A3, A1, A6, A2, A4], but image_order_step still returned 3, so the prediction was now A6. That is why the accuracy changed every run, driven entirely by the random ordering. This problem taught me to be careful with every detail, because a tiny mistake can leave you confused for a long time. OVER!
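The mismatch described above can be reproduced without TensorFlow. This is a minimal sketch: the "model" always predicts index 3 (standing in for what the restored weights make tf.arg_max return), and only the index-to-label mapping changes between runs; the A1..A6 names follow the answer's example, and `sorted()` is one hypothetical way to make the mapping deterministic.

```python
import random

# The restored weights always produce the same argmax index...
predicted_index = 3

# ...but the index -> label mapping was built in a different order each run.
run1_labels = ['A1', 'A2', 'A3', 'A4', 'A5', 'A6']
print(run1_labels[predicted_index])  # A4 (the run that trained the model)

run2_labels = ['A5', 'A3', 'A1', 'A6', 'A2', 'A4']
print(run2_labels[predicted_index])  # A6 (same index, wrong label)

# Fix: derive the label list deterministically, e.g. by sorting the keys,
# so training and restore runs share the same index -> label mapping.
stable_labels = sorted(random.sample(run1_labels, len(run1_labels)))
print(stable_labels[predicted_index])  # A4, the same on every run
```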