TensorFlow: selecting which GPU to use when multiple GPUs are available

Date: 2016-08-17 16:56:04

Tags: python cuda tensorflow gpu

I am new to TensorFlow and installed CUDA-7.5 and cuDNN-v4 following the instructions on the TensorFlow website. After adjusting the TensorFlow configuration and trying to run the following example from the website:

python -m tensorflow.models.image.mnist.convolutional

I am fairly sure TensorFlow is using one of the GPUs but not the other; however, I would like it to use the faster GPU. I suspect the example code defaults to the first GPU it finds. If so, how can I select which GPU to use from my TensorFlow code in Python?

The messages I get when running the example code are:

ldt-tesla:~$ python -m tensorflow.models.image.mnist.convolutional
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.so locally
Extracting data/train-images-idx3-ubyte.gz
Extracting data/train-labels-idx1-ubyte.gz
Extracting data/t10k-images-idx3-ubyte.gz
Extracting data/t10k-labels-idx1-ubyte.gz
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties:
name: Tesla K20c
major: 3 minor: 5 memoryClockRate (GHz) 0.7055
pciBusID 0000:03:00.0
Total memory: 4.63GiB
Free memory: 4.57GiB
W tensorflow/stream_executor/cuda/cuda_driver.cc:572] creating context when one is currently active; existing: 0x2f27390
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 1 with properties:
name: Quadro K2200
major: 5 minor: 0 memoryClockRate (GHz) 1.124
pciBusID 0000:02:00.0
Total memory: 3.95GiB
Free memory: 3.62GiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:59] cannot enable peer access from device ordinal 0 to device ordinal 1
I tensorflow/core/common_runtime/gpu/gpu_init.cc:59] cannot enable peer access from device ordinal 1 to device ordinal 0
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0 1
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0:   Y N
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 1:   N Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:806] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla K20c, pci bus id: 0000:03:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:793] Ignoring gpu device (device: 1, name: Quadro K2200, pci bus id: 0000:02:00.0) with Cuda multiprocessor count: 5. The minimum required count is 8. You can adjust this requirement with the env var TF_MIN_GPU_MULTIPROCESSOR_COUNT.
Initialized!

2 Answers:

Answer 0 (score: 6):

You can set the CUDA_VISIBLE_DEVICES environment variable so that only the GPUs you want are exposed. Quoting this example on masking GPUs:

CUDA_VISIBLE_DEVICES=1       Only device 1 will be seen
CUDA_VISIBLE_DEVICES=0,1     Devices 0 and 1 will be visible
CUDA_VISIBLE_DEVICES="0,1"   Same as above, quotation marks are optional
CUDA_VISIBLE_DEVICES=0,2,3   Devices 0, 2, 3 will be visible; device 1 is masked
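
The mask can also be set from inside the Python script itself, as long as it happens before TensorFlow initializes CUDA. A minimal sketch, assuming you want to expose only device 1 (the Quadro K2200 from the log above):

import os

# Must be set before `import tensorflow`, since the mask is read when CUDA is initialized.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import tensorflow as tf  # TensorFlow now sees only the exposed device, renumbered as /gpu:0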

Answer 1 (score: 1):

You can choose which GPU the program runs on at launch time instead of hard-coding it into the script. This avoids problems when the script is run on machines that do not have multiple GPUs, or do not have that many GPUs.

Say you want to run on GPU #3; you can do it like this:

CUDA_VISIBLE_DEVICES=3 python -m tensorflow.models.image.mnist.convolutional
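
If you instead want to pick the GPU from inside your TensorFlow code, as asked above, you can pin operations with a device context manager. A minimal sketch, assuming both GPUs are visible to TensorFlow (in the log above the Quadro is skipped unless TF_MIN_GPU_MULTIPROCESSOR_COUNT is lowered):

import tensorflow as tf

with tf.device('/gpu:1'):  # place these ops on the second visible GPU
    a = tf.constant([1.0, 2.0], name='a')
    b = tf.constant([3.0, 4.0], name='b')
    c = a + b

# allow_soft_placement falls back to another device if /gpu:1 cannot be used;
# log_device_placement prints where each op actually runs.
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(c))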