I am using the TensorFlow C++ API for image classification. Part of my code is shown below:
std::vector<Tensor> outputs;
// Extra feed tensors: a boolean scalar plus two single-element tensors.
tensorflow::Tensor a(tensorflow::DT_BOOL, tensorflow::TensorShape({}));
tensorflow::Tensor b(tensorflow::DT_FLOAT, tensorflow::TensorShape({1}));
tensorflow::Tensor c(tensorflow::DT_INT64, tensorflow::TensorShape({1}));
b.vec<float>()(0) = 1.0f;
a.scalar<bool>()() = false;
tensorflow::TensorShape shape = resized_tensor.shape();
// Feed the image tensor and the two auxiliary tensors, fetch the output layer.
Status run_status = session->Run({ { input_layer, resized_tensor }, { input_layer2, b }, { input_layer1, a } },
                                 { output_layer }, {}, &outputs);
if (!run_status.ok()) {
    LOG(ERROR) << "Running model failed: " << run_status;
    return -1;
}
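(For context, a minimal sketch of a typical one-time setup for a session like the one above, assuming the standard ReadBinaryProto / NewSession pattern from the TensorFlow C++ examples; graph_path and the error handling are placeholders and may differ from the actual code:)
#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/public/session.h"
// One-time setup: load the frozen graph and create the session that
// Run() is later called on. graph_path is a placeholder.
tensorflow::GraphDef graph_def;
Status load_status = tensorflow::ReadBinaryProto(tensorflow::Env::Default(), graph_path, &graph_def);
if (!load_status.ok()) {
    LOG(ERROR) << "Loading graph failed: " << load_status;
    return -1;
}
std::unique_ptr<tensorflow::Session> session(tensorflow::NewSession(tensorflow::SessionOptions()));
Status create_status = session->Create(graph_def);
if (!create_status.ok()) {
    LOG(ERROR) << "Creating session failed: " << create_status;
    return -1;
}
With a setup along these lines, the graph is loaded and the session created only once, and each image afterwards only triggers a Run() call on the same session.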
The first time I call the Run() function, the output is as follows:
2019-05-01 18:02:36.575459: I D:\projects\c++\tensorflow_GPU\tensorflow\core\common_runtime\gpu\gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3039 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
2019-05-01 18:02:36.620825: I D:\projects\c++\tensorflow_GPU\tensorflow\core\common_runtime\gpu\gpu_device.cc:1423] Adding visible gpu devices: 0
2019-05-01 18:02:36.624719: I D:\projects\c++\tensorflow_GPU\tensorflow\core\common_runtime\gpu\gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-05-01 18:02:36.628097: I D:\projects\c++\tensorflow_GPU\tensorflow\core\common_runtime\gpu\gpu_device.cc:917] 0
2019-05-01 18:02:36.631864: I D:\projects\c++\tensorflow_GPU\tensorflow\core\common_runtime\gpu\gpu_device.cc:930] 0: N
When I call the Run() function again, it prints the same message:
D:\projects\c++\tensorflow_GPU\tensorflow\core\common_runtime\gpu\gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3039 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
How can I avoid TensorFlow printing "Created TensorFlow device" again when Run() is called a second time? Does this slow things down?