Can TensorFlow automatically schedule operations across all available GPUs?

Asked: 2016-08-04 07:07:26

Tags: gpu tensorflow deep-learning

We have read the TensorFlow paper's discussion of scheduling. According to it, the runtime can simulate executing the graph ahead of time and find the "right" device for each operation.

But when we ran a test with tf.Session(config=tf.ConfigProto(log_device_placement=True)) and did not specify a device for anything, we found that all operations were placed on the first GPU.
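For reference, a minimal sketch of the kind of test we ran (the toy model here is illustrative, not our actual code): build a small graph with an Adam optimizer, specify no devices, and turn on placement logging.

    import tensorflow as tf

    # illustrative toy model; the real graph is larger
    x = tf.placeholder(tf.float32, [None, 10])
    y = tf.placeholder(tf.float32, [None, 1])
    w = tf.Variable(tf.zeros([10, 1]))
    b = tf.Variable(tf.zeros([1]))
    loss = tf.reduce_mean(tf.square(tf.matmul(x, w) + b - y))
    train_op = tf.train.AdamOptimizer().minimize(loss)

    # log_device_placement=True prints the chosen device for every op
    sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
    sess.run(tf.initialize_all_variables())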

The log looks like this:

I tensorflow/core/common_runtime/simple_placer.cc:818] Adam/epsilon: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Adam/beta2: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Adam/beta1: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Adam/learning_rate: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_3/Adam_1: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_3/Adam_1/read: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_3/Adam_1/Assign: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_3/Adam: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_3/Adam/read: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_3/Adam/Assign: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_2/Adam_1: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_2/Adam_1/read: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_2/Adam_1/Assign: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_2/Adam: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_2/Adam/read: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_2/Adam/Assign: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_1/Adam_1: /job:localhost/replica:0/task:0/gpu:0

Variables are placed on the GPU as well. I suspect the placer is not yet smart enough, and that the best practice for users is to specify explicitly which operations should run on the CPU or a particular GPU, especially when we have multiple GPUs. Is that right?
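For reference, the explicit pinning we have in mind would look something like this minimal sketch (device strings and shapes are illustrative):

    import tensorflow as tf

    with tf.device('/cpu:0'):
        # keep variables on the CPU, a common pattern in multi-GPU setups
        w = tf.Variable(tf.zeros([10, 1]), name='w')

    with tf.device('/gpu:1'):
        # force this op onto the second GPU instead of the default first one
        x = tf.placeholder(tf.float32, [None, 10])
        y = tf.matmul(x, w)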

1 Answer:

Answer 0 (score: 4)

As of v0.9, TensorFlow places all operations on the first GPU you have, so what you are observing is 100% expected. Now, if your question is "Can TensorFlow automatically distribute my graph across my 4 GPUs without my intervention?", the answer as of August 2016 is no.

If you are trying to harness the power of all the GPUs available on your local machine, take a look at this variation of the cifar10 tutorial; a condensed sketch of its core pattern follows below. The next level up is replicated training with distributed tensorflow, but that may be overkill for what you are trying to do.
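The heart of that tutorial is in-graph replication: one "tower" per GPU, all sharing the same variables. A condensed sketch, where tower_loss() is a hypothetical stand-in for your per-GPU model-building function:

    import tensorflow as tf

    NUM_GPUS = 4  # assumption: four local GPUs

    def tower_loss():
        # hypothetical per-GPU model: one linear layer and a squared loss
        x = tf.random_normal([32, 10])
        w = tf.get_variable('w', [10, 1])
        return tf.reduce_mean(tf.square(tf.matmul(x, w)))

    opt = tf.train.GradientDescentOptimizer(0.1)
    tower_grads = []
    for i in range(NUM_GPUS):
        with tf.device('/gpu:%d' % i), tf.name_scope('tower_%d' % i):
            tower_grads.append(opt.compute_gradients(tower_loss()))
            # share the same variables across towers rather than copy them
            tf.get_variable_scope().reuse_variables()

    # the real tutorial then averages tower_grads and applies them once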

With all the virtualization work going on, the question of which device a particular operation is assigned to may soon become irrelevant.