Retraining a pre-trained ResNet-50 model for classification with tf-slim

Date: 2018-02-23 11:42:22

Tags: python tensorflow resnet pre-trained-model tensorflow-slim

I want to retrain a pre-trained ResNet-50 model with TensorFlow slim and use it later for classification purposes.

ResNet-50 is designed for 1000 classes, but I want only 10 classes (land-cover types) as output.

First I am trying to code it for only one image, which I can generalize later. So this is my code:

from tensorflow.contrib.slim.nets import resnet_v1
import tensorflow as tf
import tensorflow.contrib.slim as slim
import numpy as np

batch_size = 1
height, width, channels = 224, 224, 3
# Create graph
inputs = tf.placeholder(tf.float32, shape=[batch_size, height, width, channels])
with slim.arg_scope(resnet_v1.resnet_arg_scope()):
    logits, end_points = resnet_v1.resnet_v1_50(inputs, is_training=False)

saver = tf.train.Saver()    

with tf.Session() as sess:
    saver.restore(sess, 'd:/bitbucket/cnn-lcm/data/ckpt/resnet_v1_50.ckpt')
    representation_tensor = sess.graph.get_tensor_by_name('resnet_v1_50/pool5:0')
    #  list of files to read
    filename_queue = tf.train.string_input_producer(['d:/bitbucket/cnn-lcm/data/train/AnnualCrop/AnnualCrop_735.jpg']) 
    reader = tf.WholeFileReader()
    key, value = reader.read(filename_queue)
    img = tf.image.decode_jpeg(value, channels=3)    

    im = np.array(img)
    im = im.reshape(1,224,224,3)
    predict_values, logit_values = sess.run([end_points, logits], feed_dict= {inputs: im})
    print (np.max(predict_values), np.max(logit_values))
    print (np.argmax(predict_values), np.argmax(logit_values))

    #img = ...  #load image here with size [1, 224,224, 3]
    #features = sess.run(representation_tensor, {'Placeholder:0': img})

I am a bit confused about what comes next (should I open a graph, or should I load the structure of the network and then load the weights, or load batches?). There is also a problem with the image shape. There is a lot of generic documentation, and it is not easy to interpret :/

Any suggestions on how to correct the code so that it fits my purpose?

Test image: AnnualCrop735


1 answer:

Answer 0 (score: 0)

The resnet layers give you predictions if you supply the num_classes kwarg. Have a look at the documentation and code for resnet_v1.
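
Since the checkpoint was trained with a 1000-class logits layer, the new 10-class head cannot be restored from it. A minimal sketch of one way to handle this (the exclude scope name follows the usual resnet_v1_50 variable layout and the checkpoint path is the one from the question, so adapt both if your setup differs):

# Restore the pretrained weights, skipping the old 1000-class logits layer
# so the new 10-class layer is trained from scratch.
variables_to_restore = slim.get_variables_to_restore(
    exclude=['resnet_v1_50/logits'])
init_fn = slim.assign_from_checkpoint_fn(
    'd:/bitbucket/cnn-lcm/data/ckpt/resnet_v1_50.ckpt',
    variables_to_restore)

# slim.learning.train accepts this as init_fn=init_fn, so training
# starts from the ImageNet weights instead of a random initialization.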

You need to add a loss function and a train op on top of it in order to fine-tune the reused resnet_v1:

...
with slim.arg_scope(resnet_v1.resnet_arg_scope()):
    logits, end_points = resnet_v1.resnet_v1_50(
        inputs,
        num_classes=10,
        is_training=True,
        reuse=tf.AUTO_REUSE)
...
...
    classification_loss = slim.losses.softmax_cross_entropy(
        logits, im_label)

    regularization_loss = tf.add_n(slim.losses.get_regularization_losses())
    total_loss = classification_loss + regularization_loss

    optimizer = tf.train.GradientDescentOptimizer(learning_rate)
    train_op = slim.learning.create_train_op(total_loss, optimizer)

    slim.learning.train(
        train_op,
        logdir='/tmp/',
        number_of_steps=1000,
        save_summaries_secs=300,
        save_interval_secs=600)
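
On the input side, note that np.array(img) in the question does not evaluate the decode op; with slim.learning.train it is simpler to keep the image and the label in the graph. A minimal sketch, reusing value from the question's reader; the resize step and the class index 0 for im_label are assumptions purely for illustration:

# Keep the image and label as tensors; pass this `inputs` to resnet_v1_50
# instead of the placeholder when training with slim.learning.train,
# which starts the queue runners and drives the session itself.
image = tf.image.decode_jpeg(value, channels=3)
image = tf.image.resize_images(image, [224, 224])   # float32, [224, 224, 3]
inputs = tf.expand_dims(image, 0)                    # [1, 224, 224, 3]

im_label = tf.one_hot([0], depth=10)                 # [1, 10]; class 0 is a stand-in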