What is a device interconnect StreamExecutor with strength 1 edge matrix?

Asked: 2018-09-05 19:48:50

Tags: tensorflow, nvidia

I have four NVIDIA GTX 1080 graphics cards, and when initializing a session I see the following console output:

Adding visible gpu devices: 0, 1, 2, 3
 Device interconnect StreamExecutor with strength 1 edge matrix:
      0 1 2 3 
 0:   N Y N N 
 1:   Y N N N 
 2:   N N N Y 
 3:   N N Y N 

I also have 2 NVIDIA Tesla M60 graphics cards, and there the initialization looks like this:

Adding visible gpu devices: 0, 1, 2, 3
 Device interconnect StreamExecutor with strength 1 edge matrix:
      0 1 2 3 
 0:   N N N N 
 1:   N N N N 
 2:   N N N N 
 3:   N N N N 

I also noticed that this output changed since I last updated from 1.6 to 1.8 on the 1080 machine. It used to look something like this (I can't remember exactly, just from recollection):

 Adding visible gpu devices: 0, 1, 2, 3
Device interconnect StreamExecutor with strength 1 edge matrix:
     0 1 2 3            0 1 2 3
0:   Y N N N         0: N N Y N
1:   N Y N N    or   1: N N N Y
2:   N N Y N         2: Y N N N
3:   N N N Y         3: N Y N N

My questions are:

  • What is this device interconnect?
  • What effect does it have on computational capability?
  • Why does it differ between GPUs?
  • Can it change over time due to hardware reasons (failures, driver inconsistencies, ...)?

1 Answer:

Answer (score: 3)

TL;DR

What is this device interconnect?

As Almog David stated in the comments, this tells you whether one GPU has direct memory access to the other.

What effect does it have on computational capability?

The only effect this has is on multi-GPU training: data transfer is faster if the two GPUs have a device interconnect. (A rough timing sketch is included at the end of this answer.)

Why does it differ between GPUs?

This depends on the topology of the hardware setup. A motherboard has only so many PCI-e slots that sit on the same bus. (Check the topology with nvidia-smi topo -m.)

Can it change over time due to hardware reasons (failures, driver inconsistencies, ...)?

I don't think the order will change over time unless NVIDIA changes the default enumeration scheme. There is a little more detail here.

Explanation

This message is generated in the BaseGPUDeviceFactory::CreateDevices function. It iterates through each pair of devices in the given order and calls cuDeviceCanAccessPeer. As Almog David mentioned in the comments, this just indicates whether you can perform DMA between the devices.
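
The same question can be asked of the CUDA driver directly. Below is a minimal sketch (assuming the pycuda package is installed; the file name and output format are mine) that prints a peer-access matrix analogous to the one TensorFlow logs:

# peer_matrix.py -- minimal sketch: ask the driver, via the same
# cuDeviceCanAccessPeer query, whether device i can DMA into device j
import pycuda.driver as drv

drv.init()
n = drv.Device.count()
devices = [drv.Device(i) for i in range(n)]

print("     " + " ".join(str(i) for i in range(n)))
for i, a in enumerate(devices):
    row = ["Y" if i != j and a.can_access_peer(b) else "N"
           for j, b in enumerate(devices)]
    print("%d:   %s" % (i, " ".join(row)))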

You can run a small test to check that the order matters. Consider the following snippet:

# test.py
import tensorflow as tf

# allow growth to take up minimal resources
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

sess = tf.Session(config=config)
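
(An aside of mine, not part of the original answer: on TensorFlow 2.x, assuming TF >= 2.1, the rough equivalent of the snippet above would be the following; the file name test_tf2.py is hypothetical.)

# test_tf2.py -- sketch for TF 2.x: memory growth replaces allow_growth,
# and running any GPU op triggers device creation and the interconnect log
import tensorflow as tf

for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

with tf.device("/GPU:0"):
    _ = tf.constant(1.0)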

Now let's check the output for different device orders in CUDA_VISIBLE_DEVICES:

$ CUDA_VISIBLE_DEVICES=0,1,2,3 python3 test.py
...
2019-03-26 15:26:16.111423: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0, 1, 2, 3
2019-03-26 15:26:18.635894: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-03-26 15:26:18.635965: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0 1 2 3 
2019-03-26 15:26:18.635974: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N Y N N 
2019-03-26 15:26:18.635982: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 1:   Y N N N 
2019-03-26 15:26:18.635987: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 2:   N N N Y 
2019-03-26 15:26:18.636010: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 3:   N N Y N 
...

$ CUDA_VISIBLE_DEVICES=2,0,1,3 python3 test.py
...
2019-03-26 15:26:30.090493: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0, 1, 2, 3
2019-03-26 15:26:32.758272: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-03-26 15:26:32.758349: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0 1 2 3 
2019-03-26 15:26:32.758358: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N N N Y 
2019-03-26 15:26:32.758364: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 1:   N N Y N 
2019-03-26 15:26:32.758389: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 2:   N Y N N 
2019-03-26 15:26:32.758412: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 3:   Y N N N
...
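
If you want the enumeration order itself to be stable, CUDA_DEVICE_ORDER is a documented CUDA environment variable that pins enumeration to PCI bus order instead of the default FASTEST_FIRST scheme. A small sketch of mine (not part of the original answer); it has to run before TensorFlow initializes CUDA:

# pin_order.py -- sketch: force enumeration by PCI bus ID; the environment
# variables must be set before the CUDA runtime is initialized
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"   # default is FASTEST_FIRST
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"

import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)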

You can get a more detailed description of the connections by running nvidia-smi topo -m. For example:

       GPU0      GPU1    GPU2   GPU3    CPU Affinity
GPU0     X       PHB    SYS     SYS     0-7,16-23
GPU1    PHB       X     SYS     SYS     0-7,16-23
GPU2    SYS      SYS     X      PHB     8-15,24-31
GPU3    SYS      SYS    PHB      X      8-15,24-31

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe switches (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing a single PCIe switch
  NV#  = Connection traversing a bonded set of # NVLinks

I believe that the lower you go on the list, the faster the transfer.
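
To sanity-check the effect on transfer speed, here is a rough timing sketch of mine (TF 1.x style to match the snippet above; it times tensor generation on GPU 0 plus the copy to GPU 1, so treat the numbers as relative only):

# copy_bench.py -- rough sketch: time a GPU:0 -> GPU:1 tensor copy; pairs
# marked Y in the matrix (or PHB/PIX/NV# in nvidia-smi topo -m) should be
# noticeably faster than pairs that have to bounce through host memory
import time
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True

with tf.device("/gpu:0"):
    src = tf.random_normal([4096, 4096])
with tf.device("/gpu:1"):
    dst = tf.reduce_sum(tf.identity(src))  # forces the cross-device copy

with tf.Session(config=config) as sess:
    sess.run(dst)  # warm-up
    start = time.time()
    for _ in range(10):
        sess.run(dst)
    print("avg time per run: %.4f s" % ((time.time() - start) / 10))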