Preventing TensorFlow from placing ops on multiple GPUs in C++

Date: 2018-06-07 23:37:51

Tags: c++ tensorflow

I am running a neural network from C++ code using the dynamically linked TensorFlow library. TensorFlow creates devices on both GPUs:

2018-06-07 19:03:10.578031: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9168 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:17:00.0, compute capability: 6.1) 
2018-06-07 19:03:10.615271: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 9822 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:65:00.0, compute capability: 6.1)

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.90                 Driver Version: 384.90                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 108...  Off  | 00000000:17:00.0 Off |                  N/A |
|  0%   54C    P2   106W / 250W |  10734MiB / 11172MiB |     33%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 108...  Off  | 00000000:65:00.0  On |                  N/A |
|  1%   54C    P2    65W / 250W |  10655MiB / 11171MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     22230      C   ./demo                                     10723MiB |
|    1      1264      G   /usr/lib/xorg/Xorg                           309MiB |
|    1      3230      G   compiz                                       235MiB |
|    1     22230      C   ./demo                                     10095MiB |
|    1     25625      G   unity-control-center                           3MiB |
+-----------------------------------------------------------------------------+

Code:

tf::Session* session_ptr;
auto options = tf::SessionOptions();
// Restrict TensorFlow to GPU 0 only
options.config.mutable_gpu_options()->set_visible_device_list("0");
auto status = tf::NewSession(options, &session_ptr);  // pass the configured options, not a fresh tf::SessionOptions()
session.reset(session_ptr);                           // session is a std::unique_ptr<tf::Session>

This does not seem to prevent TensorFlow from creating devices on all GPUs.

I know the CUDA_VISIBLE_DEVICES environment variable can be used, but I need to do this at runtime. I also need to run several instances of this program (possibly 4) on the same GPU.

Is there a way to do this?
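For the related goal of packing several instances onto one GPU, the relevant knobs live in the GPUOptions of the session config. A sketch of the fields involved, assuming the TensorFlow C++ headers are available (this fragment is not standalone and is not verified against any particular TF version):

```cpp
// Configure the session before it is created; these settings only take
// effect for the first session in the process.
tensorflow::SessionOptions options;
auto* gpu = options.config.mutable_gpu_options();
gpu->set_visible_device_list("0");              // restrict placement to GPU 0
gpu->set_per_process_gpu_memory_fraction(0.2);  // cap at ~20% so four instances fit
gpu->set_allow_growth(true);                    // allocate lazily instead of grabbing all memory up front
```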

I have also tried using

tf::GraphDef graph_def;
auto status1 = tf::ReadBinaryProto(tf::Env::Default(), tf_model, &graph_def);
// The device must be a full device name such as "/device:GPU:0", not just "0"
tensorflow::graph::SetDefaultDevice("/device:GPU:0", &graph_def);

but it still allocates memory on both GPUs...

0 Answers:

There are no answers yet.