I have two GPUs and want to try some distributed training (model parallelism) in TensorFlow.
The two GPUs are:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: TITAN Xp COLLECTORS EDITION, pci bus id: 0000:04:00.0, compute capability: 6.1
/job:localhost/replica:0/task:0/device:GPU:1 -> device: 1, name: TITAN X (Pascal), pci bus id: 0000:82:00.0, compute capability: 6.1
My plan is to split LeNet into two parts and assign each part to one GPU.
LeNet has five layers. I use with tf.device('/gpu:0'): to assign layer1 to GPU 0, and with tf.device('/gpu:1'): to assign layer2 through layer5 to GPU 1.
I know there is no need for model parallelism with a model this small; I just want to try model parallelism on a small model first.
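Simplified, the split looks roughly like this (a sketch only: the exact layer shapes and ops are illustrative reconstructions of LeNet; what matters is the device/name-scope placement):

    import tensorflow as tf  # TF 1.x graph-mode API

    x = tf.placeholder(tf.float32, [None, 32, 32, 1])

    # layer1 on the first GPU
    with tf.device('/gpu:0'), tf.name_scope('layer1'):
        conv1_w = tf.Variable(tf.truncated_normal([5, 5, 1, 6], stddev=0.1), name='conv1_w')
        conv1_b = tf.Variable(tf.zeros([6]), name='conv1_b')
        act1 = tf.nn.relu(tf.nn.conv2d(x, conv1_w, [1, 1, 1, 1], 'VALID') + conv1_b)
        pool1 = tf.nn.max_pool(act1, [1, 2, 2, 1], [1, 2, 2, 1], 'VALID')

    # layer2 through layer5 on the second GPU
    with tf.device('/gpu:1'):
        with tf.name_scope('layer2'):
            conv2_w = tf.Variable(tf.truncated_normal([5, 5, 6, 16], stddev=0.1), name='conv2_w')
            conv2_b = tf.Variable(tf.zeros([16]), name='conv2_b')
            act2 = tf.nn.relu(tf.nn.conv2d(pool1, conv2_w, [1, 1, 1, 1], 'VALID') + conv2_b)
            pool2 = tf.nn.max_pool(act2, [1, 2, 2, 1], [1, 2, 2, 1], 'VALID')
        # layer3 (fc1), layer4 (fc2), layer5 (fc3) continue inside this same device scope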
The device-placement log shows that all ops have been assigned to the devices I intended:
layer5/fc3_b: (VariableV2): /job:localhost/replica:0/task:0/device:GPU:1
layer5/fc3_b/read: (Identity): /job:localhost/replica:0/task:0/device:GPU:1
layer5/fc3_b/Assign: (Assign): /job:localhost/replica:0/task:0/device:GPU:1
layer5/fc3_w: (VariableV2): /job:localhost/replica:0/task:0/device:GPU:1
layer5/fc3_w/read: (Identity): /job:localhost/replica:0/task:0/device:GPU:1
layer5/truncated_normal/TruncatedNormal: (TruncatedNormal): /job:localhost/replica:0/task:0/device:GPU:1
layer5/truncated_normal/mul: (Mul): /job:localhost/replica:0/task:0/device:GPU:1
layer5/truncated_normal: (Add): /job:localhost/replica:0/task:0/device:GPU:1
layer5/fc3_w/Assign: (Assign): /job:localhost/replica:0/task:0/device:GPU:1
layer4/fc2_b: (VariableV2): /job:localhost/replica:0/task:0/device:GPU:1
layer4/fc2_b/read: (Identity): /job:localhost/replica:0/task:0/device:GPU:1
layer4/fc2_b/Assign: (Assign): /job:localhost/replica:0/task:0/device:GPU:1
layer4/fc2_w: (VariableV2): /job:localhost/replica:0/task:0/device:GPU:1
layer4/fc2_w/read: (Identity): /job:localhost/replica:0/task:0/device:GPU:1
layer4/truncated_normal/TruncatedNormal: (TruncatedNormal): /job:localhost/replica:0/task:0/device:GPU:1
layer4/truncated_normal/mul: (Mul): /job:localhost/replica:0/task:0/device:GPU:1
layer4/truncated_normal: (Add): /job:localhost/replica:0/task:0/device:GPU:1
layer4/fc2_w/Assign: (Assign): /job:localhost/replica:0/task:0/device:GPU:1
layer3/fc1_b: (VariableV2): /job:localhost/replica:0/task:0/device:GPU:1
layer3/fc1_b/read: (Identity): /job:localhost/replica:0/task:0/device:GPU:1
layer3/fc1_b/Assign: (Assign): /job:localhost/replica:0/task:0/device:GPU:1
layer3/fc1_w: (VariableV2): /job:localhost/replica:0/task:0/device:GPU:1
layer3/fc1_w/read: (Identity): /job:localhost/replica:0/task:0/device:GPU:1
layer3/truncated_normal/TruncatedNormal: (TruncatedNormal): /job:localhost/replica:0/task:0/device:GPU:1
layer3/truncated_normal/mul: (Mul): /job:localhost/replica:0/task:0/device:GPU:1
layer3/truncated_normal: (Add): /job:localhost/replica:0/task:0/device:GPU:1
layer3/fc1_w/Assign: (Assign): /job:localhost/replica:0/task:0/device:GPU:1
layer2/conv2_b: (VariableV2): /job:localhost/replica:0/task:0/device:GPU:1
layer2/conv2_b/read: (Identity): /job:localhost/replica:0/task:0/device:GPU:1
layer2/conv2_b/Assign: (Assign): /job:localhost/replica:0/task:0/device:GPU:1
layer2/conv2_w: (VariableV2): /job:localhost/replica:0/task:0/device:GPU:1
layer2/conv2_w/read: (Identity): /job:localhost/replica:0/task:0/device:GPU:1
layer2/truncated_normal/TruncatedNormal: (TruncatedNormal): /job:localhost/replica:0/task:0/device:GPU:1
layer2/truncated_normal/mul: (Mul): /job:localhost/replica:0/task:0/device:GPU:1
layer2/truncated_normal: (Add): /job:localhost/replica:0/task:0/device:GPU:1
layer2/conv2_w/Assign: (Assign): /job:localhost/replica:0/task:0/device:GPU:1
init/NoOp_1: (NoOp): /job:localhost/replica:0/task:0/device:GPU:1
layer1/conv1_b: (VariableV2): /job:localhost/replica:0/task:0/device:GPU:0
layer1/conv1_b/read: (Identity): /job:localhost/replica:0/task:0/device:GPU:0
layer1/conv1_b/Assign: (Assign): /job:localhost/replica:0/task:0/device:GPU:0
layer1/conv1_w: (VariableV2): /job:localhost/replica:0/task:0/device:GPU:0
layer1/conv1_w/read: (Identity): /job:localhost/replica:0/task:0/device:GPU:0
layer1/truncated_normal/TruncatedNormal: (TruncatedNormal): /job:localhost/replica:0/task:0/device:GPU:0
layer1/truncated_normal/mul: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
layer1/truncated_normal: (Add): /job:localhost/replica:0/task:0/device:GPU:0
layer1/conv1_w/Assign: (Assign): /job:localhost/replica:0/task:0/device:GPU:0
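(For reference, a placement log like the one above is what TF 1.x prints when the session is created with log_device_placement turned on; allow_soft_placement below is an optional extra, not required for the log:)

    config = tf.ConfigProto(
        log_device_placement=True,   # print the device each op is assigned to
        allow_soft_placement=True)   # let ops without a GPU kernel fall back to CPU
    sess = tf.Session(config=config)
    sess.run(tf.global_variables_initializer())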
But timeline.json shows a different result, as in the figure below.
The timeline suggests that the layer2-layer5 ops are launched in GPU1 but actually run on GPU0, so with tf.device('/gpu:1'): does not seem to do what I want.
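(The timeline.json was dumped in the usual TF 1.x way, roughly as below; train_op stands in for whatever op is being profiled:)

    from tensorflow.python.client import timeline

    # Run one step with full tracing and write a Chrome trace file.
    run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
    run_metadata = tf.RunMetadata()
    sess.run(train_op, options=run_options, run_metadata=run_metadata)

    tl = timeline.Timeline(run_metadata.step_stats)
    with open('timeline.json', 'w') as f:
        f.write(tl.generate_chrome_trace_format())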
Is this the expected behavior in TensorFlow?
This is my first Stack Overflow question; please let me know if any other information is needed. Thanks.
Answer 0 (score: 1)
This is just an artifact of the Chrome trace event format.
The stream "/job:localhost/replica:0/task:0/device:GPU:0 Compute" shows the times when CUDA kernels are launched/queued for the ops that execute on GPU:0.
The stream "/job:localhost/replica:0/task:0/device:GPU:1 Compute" shows the times when CUDA kernels are launched/queued for the ops that execute on GPU:1.
All the streams matching "/device:GPU:0/stream.* Compute" show the times when the ops actually executed, for all GPUs. So to find out on which GPU an op actually executed, you need to look at the "/job:localhost/replica:0/task:0/device:GPU:.* Compute" streams.
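If you want to check this programmatically rather than by eye, here is a rough sketch (my own addition, not a TF API) that groups the op events in timeline.json by the row they appear under, using the Chrome trace format's process_name metadata:

    import collections
    import json

    with open('timeline.json') as f:
        trace = json.load(f)

    # 'M' (metadata) events give each pid a human-readable row name;
    # TF emits one pid per "... Compute" / "stream.*" row.
    row_names = {e['pid']: e['args']['name']
                 for e in trace['traceEvents']
                 if e.get('ph') == 'M' and e.get('name') == 'process_name'}

    # 'X' (complete) events carry the actual op/kernel timings.
    ops_by_row = collections.defaultdict(set)
    for e in trace['traceEvents']:
        if e.get('ph') == 'X':
            ops_by_row[row_names.get(e['pid'], 'unknown')].add(e['name'])

    for row, ops in sorted(ops_by_row.items()):
        print(row, '->', sorted(ops)[:5], '...')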
Hope this answers your question.