How do I use a trained TensorFlow network for inference?

Time: 2017-12-31 21:08:25

Tags: tensorflow

I am new to TensorFlow and I hope you can help me.

I built a TensorFlow CNN and trained it successfully. The training dataset is a MATLAB array. Now I want to use the trained network for inference, but I don't know how to write the Python code for it.

  1. During training I saved the model, but I am not sure how to load it for inference.
  2. My inference data is also a MATLAB array, in the same format as the training data. How do I feed it in? During training I used TensorLayer's minibatches; should I also use minibatches for inference?
  3. Below is my inference code, which produces many errors:

    print("\n\nPreparing testing data........................")
    test_data = sio.loadmat('MyTest.mat')
    Z0 = test_data['Real_testing1']
    img_num_test = Z0.shape[0]
    X_test = np.empty([img_num_test, 128, 128, 1], dtype=float)
    X_test[:,:,:,0] = Z0
    Y_test = np.column_stack((np.ones([img_num_test, 1], dtype=int),np.zeros([img_num_test, 1], dtype=int)))
    print("\tTesting X shape: {0}".format(X_test.shape))
    print("\tTesting Y shape: {0}".format(Y_test.shape))
    
    
    print("\n\Restore the network ...")
    save_dir = "checkpoints/";
    epoch = 1000
    model_name = save_dir + str(epoch) + '_model'
    if not os.path.exists(save_dir):
        os.makedirs(save_dir)
    saver = tf.train.Saver().restore(sess, save_path=model_name)
    
    start_time_begin = time.time()
    
    print("\n\Running network...")
    
    start_time = time.time()
    
    y = model.Scribenet(X_test[0, :, :, :], False, 1.0)
    y = sess.run([y], feed_dict=feed_dict)
    print(y[0:9])
    
    sess.close()
    

Below is my training code:

    x = tf.placeholder(tf.float32, shape=[None, 128, 128, 1], name='x')
    y_ = tf.placeholder(tf.int64, shape=[None, 2], name='y_')
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
    is_training = tf.placeholder(tf.bool, name='is_training')
    
    net_in = x
    net_out = model.MyCNN(net_in, is_training, keep_prob)
    
    y = net_out
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y, labels=y_, name='cost'))
    correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
    acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    y_op = tf.argmax(tf.nn.softmax(y),1)
    
    train_op = tf.train.AdamOptimizer(learning_rate, beta1=0.9, beta2=0.999,
                                          epsilon=1e-08, use_locking=False).minimize(cost)
    
    sess.run(tf.global_variables_initializer())
    
    save_dir = "checkpoints/";
    if not os.path.exists(save_dir):
        os.makedirs(save_dir)
    saver = tf.train.Saver()
    
    print("\n\nStart training the network ...")
    start_time_begin = time.time()
    for epoch in range(n_epoch):
        start_time = time.time()
        loss_ep = 0; n_step = 0
        for X_train_a, y_train_a in tl.iterate.minibatches(X_train, Y_train,
                                                    batch_size, shuffle=True):
            feed_dict = {x: X_train_a, y_: y_train_a, is_training: True, keep_prob: train_keep_prob}
            loss, _ = sess.run([cost, train_op], feed_dict=feed_dict)
            loss_ep += loss
            n_step += 1
        loss_ep = loss_ep/ n_step
    
        if (epoch+1) % save_freq == 0:
            model_name = save_dir + str(epoch+1) + '_model'
            saver.save(sess, save_path=model_name)
    

1 Answer:

Answer 0 (score: 1)

The main problem seems to be that your inference code never builds the graph. You either need to save the entire graph (in the SavedModel format), or build the graph in your inference code and load the variables from a training checkpoint (probably the easiest way to get started). As long as the variable names are the same, you can load variables saved from the training graph into your inference graph.
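
For the first option, a minimal sketch using the SavedModel builder/loader API could look like the following. This is only a sketch: the export/1 directory and the 'logits' output name are illustrative choices, it assumes the output tensor is given that name in the training script (e.g. y = tf.identity(net_out, name='logits')), and X_test is the test array loaded as in the question.

    import tensorflow as tf

    # After training: export the whole graph plus the variable values together.
    # `sess` is the training session; the export directory must not already exist.
    export_dir = "export/1"
    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
    builder.add_meta_graph_and_variables(sess, [tf.saved_model.tag_constants.SERVING])
    builder.save()

    # In the inference script: load graph and variables in one call, then look
    # the tensors up by the names they were given when the training graph was built.
    with tf.Session(graph=tf.Graph()) as infer_sess:
        tf.saved_model.loader.load(infer_sess,
                                   [tf.saved_model.tag_constants.SERVING],
                                   export_dir)
        g = infer_sess.graph
        logits = infer_sess.run(g.get_tensor_by_name("logits:0"),
                                feed_dict={g.get_tensor_by_name("x:0"): X_test[0:1],
                                           g.get_tensor_by_name("keep_prob:0"): 1.0,
                                           g.get_tensor_by_name("is_training:0"): False})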

So the inference code would essentially be your training code, but without the y_ placeholder and without the loss/optimizer logic. You can feed a single image (batch size 1) to get started, so you do not need the batching logic either.
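
For the second option, here is a sketch of what such an inference script could look like. It assumes model.MyCNN from the training code is the network you actually want (your inference snippet calls model.Scribenet instead) and that checkpoints/1000_model is a checkpoint written by the training loop; adjust both to your setup.

    import numpy as np
    import scipy.io as sio
    import tensorflow as tf
    import model

    # Load the test data the same way as in the question.
    test_data = sio.loadmat('MyTest.mat')
    Z0 = test_data['Real_testing1']
    X_test = Z0[..., np.newaxis].astype(np.float32)      # shape [N, 128, 128, 1]

    # Rebuild the SAME graph as in training, minus y_, the loss and the optimizer.
    x = tf.placeholder(tf.float32, shape=[None, 128, 128, 1], name='x')
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
    is_training = tf.placeholder(tf.bool, name='is_training')
    net_out = model.MyCNN(x, is_training, keep_prob)
    y_op = tf.argmax(tf.nn.softmax(net_out), 1)

    # The Saver must be created after the graph is built, otherwise it has
    # no variables to restore (one of the errors in the snippet above).
    saver = tf.train.Saver()

    with tf.Session() as sess:
        # Restore the variables saved during training; the names must match.
        saver.restore(sess, "checkpoints/1000_model")

        # Batch size 1: feed a single image, so no minibatch logic is needed.
        feed_dict = {x: X_test[0:1], keep_prob: 1.0, is_training: False}
        prediction = sess.run(y_op, feed_dict=feed_dict)
        print(prediction)

keep_prob is fed as 1.0 and is_training as False so that dropout and any other training-only behaviour is switched off at inference time.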