Running a TensorFlow model in C++

Posted: 2019-11-23 23:54:11

Tags: c++ tensorflow deep-learning tensorflow-serving tensorflow2.0

I trained my model using tf.keras and converted it to '.pb' as follows:

import os
import tensorflow as tf
from tensorflow.keras import backend as K
K.set_learning_phase(0)

from tensorflow.keras.models import load_model
model = load_model('model_checkpoint.h5')
model.save('model_tf2', save_format='tf')

This creates a folder 'model_tf2' containing 'assets', 'variables', and 'saved_model.pb'.

I am trying to load this model in C++. Following several other posts (mainly Using Tensorflow checkpoint to restore model in C++), I am now able to load the model:

    RunOptions run_options;
    run_options.set_timeout_in_ms(60000);
    SavedModelBundle model;
    auto status = LoadSavedModel(SessionOptions(), run_options, model_dir_path, tags, &model);
    if (!status.ok()) {
        std::cerr << "Failed: " << status;
        return -1;
    }

[Screenshot: cmd output showing the model was loaded]

The screenshot above shows that the model was loaded.

I have the following questions:

  1. How do I do a forward pass through the model?
  2. I understand 'tag' can be gpu, serve, train. What is the difference between serve and gpu?
  3. I don't understand the first two arguments to LoadSavedModel, i.e. session options and run options. What purpose do they serve? Also, could you help me understand them with a syntactic example? I set run_options by looking at another stackoverflow post, but I don't understand its purpose.

Thanks!! :)

2 answers:

Answer 0 (score: 0)

Below is the code mentioned by Patwie in the comments for doing a forward pass through the model:

#include <tensorflow/core/protobuf/meta_graph.pb.h>
#include <tensorflow/core/public/session.h>
#include <tensorflow/core/public/session_options.h>
#include <iostream>
#include <string>

typedef std::vector<std::pair<std::string, tensorflow::Tensor>> tensor_dict;

/**
 * @brief load a previously stored model
 * @details [long description]
 *
 * in Python run:
 *
 *    saver = tf.train.Saver(tf.global_variables())
 *    saver.save(sess, './exported/my_model')
 *    tf.train.write_graph(sess.graph, '.', './exported/graph.pb', as_text=False)
 *
 * this relies on a graph which has an operation called `init` responsible to
 * initialize all variables, eg.
 *
 *    sess.run(tf.global_variables_initializer())  # somewhere in the python
 * file
 *
 * @param sess active tensorflow session
 * @param graph_fn path to graph file (eg. "./exported/graph.pb")
 * @param checkpoint_fn path to checkpoint file (eg. "./exported/my_model",
 * optional)
 * @return status of reloading
 */
tensorflow::Status LoadModel(tensorflow::Session *sess, std::string graph_fn,
                             std::string checkpoint_fn = "") {
  tensorflow::Status status;

  // Read in the protobuf graph we exported
  tensorflow::MetaGraphDef graph_def;
  status = ReadBinaryProto(tensorflow::Env::Default(), graph_fn, &graph_def);
  if (status != tensorflow::Status::OK()) return status;

  // create the graph
  status = sess->Create(graph_def.graph_def());
  if (status != tensorflow::Status::OK()) return status;

  // restore model from checkpoint, iff checkpoint is given
  if (checkpoint_fn != "") {
    tensorflow::Tensor checkpointPathTensor(tensorflow::DT_STRING,
                                            tensorflow::TensorShape());
    checkpointPathTensor.scalar<std::string>()() = checkpoint_fn;

    tensor_dict feed_dict = {
        {graph_def.saver_def().filename_tensor_name(), checkpointPathTensor}};
    status = sess->Run(feed_dict, {}, {graph_def.saver_def().restore_op_name()},
                       nullptr);
    if (status != tensorflow::Status::OK()) return status;
  } else {
    // virtual Status Run(const std::vector<std::pair<string, Tensor> >& inputs,
    //                  const std::vector<string>& output_tensor_names,
    //                  const std::vector<string>& target_node_names,
    //                  std::vector<Tensor>* outputs) = 0;
    status = sess->Run({}, {}, {"init"}, nullptr);
    if (status != tensorflow::Status::OK()) return status;
  }

  return tensorflow::Status::OK();
}

int main(int argc, char const *argv[]) {
  const std::string graph_fn = "./exported/my_model.meta";
  const std::string checkpoint_fn = "./exported/my_model";

  // prepare session
  tensorflow::Session *sess;
  tensorflow::SessionOptions options;
  TF_CHECK_OK(tensorflow::NewSession(options, &sess));
  TF_CHECK_OK(LoadModel(sess, graph_fn, checkpoint_fn));

  // prepare inputs
  tensorflow::TensorShape data_shape({1, 2});
  tensorflow::Tensor data(tensorflow::DT_FLOAT, data_shape);

  // same as in python file
  auto data_ = data.flat<float>().data();
  data_[0] = 42;
  data_[1] = 43;

  tensor_dict feed_dict = {
      {"input_plhdr", data},
  };

  std::vector<tensorflow::Tensor> outputs;
  TF_CHECK_OK(
      sess->Run(feed_dict, {"sequential/Output_1/Softmax:0"}, {}, &outputs));

  std::cout << "input           " << data.DebugString() << std::endl;
  std::cout << "output          " << outputs[0].DebugString() << std::endl;

  return 0;
}
  1. If you want to run inference on the model using a GPU, you can use the tags Serve and GPU together.

  2. The session_options argument in C++ is equivalent to tf.ConfigProto(allow_soft_placement=True, log_device_placement=True) in Python.

This means that if allow_soft_placement is true, an op will be placed on the CPU if

(i) the op has no GPU implementation, or

(ii) no GPU devices are known or registered, or

(iii) it needs to be co-located with reftype inputs from the CPU.

  3. run_options is used if you want to use the Profiler, i.e. to extract runtime statistics of the graph execution. It adds information about execution time and memory consumption to the event files and lets you view this information in tensorboard.

  4. The code mentioned above gives the syntax for using session_options and run_options; a rough sketch of setting both is also shown after this list.
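As a rough, illustrative sketch of that syntax (assuming the SavedModel exported above in "model_tf2" and the default "serve" tag; the profiler trace level is only needed if you want runtime statistics):

#include <string>
#include <unordered_set>
#include <tensorflow/cc/saved_model/loader.h>
#include <tensorflow/cc/saved_model/tag_constants.h>

// Sketch: SessionOptions is the C++ counterpart of tf.ConfigProto,
// RunOptions carries per-Run settings such as timeouts and profiler traces.
tensorflow::Status LoadWithOptions(const std::string& export_dir,
                                   tensorflow::SavedModelBundle* bundle) {
  tensorflow::SessionOptions session_options;
  session_options.config.set_allow_soft_placement(true);  // fall back to CPU if an op has no GPU kernel
  session_options.config.set_log_device_placement(true);  // log the device chosen for each op

  tensorflow::RunOptions run_options;
  run_options.set_timeout_in_ms(60000);                             // abort long Run() calls after 60 s
  run_options.set_trace_level(tensorflow::RunOptions::FULL_TRACE);  // collect profiler statistics

  // "serve" is the tag-set Keras model.save() writes by default;
  // add tensorflow::kSavedModelTagGpu only if the model was exported with it.
  const std::unordered_set<std::string> tags = {tensorflow::kSavedModelTagServe};

  return tensorflow::LoadSavedModel(session_options, run_options,
                                    export_dir, tags, bundle);
}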

Answer 1 (score: 0)

This works well with TF 1.5.

Load graph function:

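A minimal sketch of such a load-graph function against the TF 1.x C++ API, assuming a frozen GraphDef exported as a single .pb file (the path and session options are supplied by the caller):

#include <memory>
#include <string>
#include <tensorflow/core/framework/graph.pb.h>
#include <tensorflow/core/platform/env.h>
#include <tensorflow/core/public/session.h>
#include <tensorflow/core/public/session_options.h>

// Sketch: read a frozen GraphDef from disk, create a session with the
// given options, and attach the graph to that session.
tensorflow::Status LoadGraph(const std::string& graph_path,
                             const tensorflow::SessionOptions& options,
                             std::unique_ptr<tensorflow::Session>* session) {
  tensorflow::GraphDef graph_def;
  tensorflow::Status status = tensorflow::ReadBinaryProto(
      tensorflow::Env::Default(), graph_path, &graph_def);
  if (!status.ok()) return status;

  session->reset(tensorflow::NewSession(options));
  return (*session)->Create(graph_def);
}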

Call the load-graph function with the path to the .pb model and the other session configurations. Once the model is loaded, you can do a forward pass by calling Run, as sketched below.