I am trying to extend single-GPU TensorFlow code to multiple GPUs. I have to sweep over 3 degrees of freedom, and unfortunately I need tf.map_fn to parallelize over the third one. I tried using device placement as shown in the official documentation, but it does not seem to work with tf.map_fn. Is there a way to run tf.map_fn across multiple GPUs?

Here is the error output:
InvalidArgumentError (see above for traceback): Cannot assign a device for operation 'map_1/TensorArray_1': Could not satisfy explicit device specification '' because the node was colocated with a group of nodes that required incompatible device '/device:GPU:1'
Colocation Debug Info:
Colocation group had the following types and devices:
TensorArrayGatherV3: GPU CPU
Range: GPU CPU
TensorArrayWriteV3: GPU CPU
TensorArraySizeV3: GPU CPU
MatMul: GPU CPU
Enter: GPU CPU
TensorArrayV3: GPU CPU
Const: GPU CPU
Colocation members and user-requested devices:
map_1/TensorArrayStack/range/delta (Const)
map_1/TensorArrayStack/range/start (Const)
map_1/TensorArray_1 (TensorArrayV3)
map_1/while/TensorArrayWrite/TensorArrayWriteV3/Enter (Enter) /device:GPU:1
map_1/TensorArrayStack/TensorArraySizeV3 (TensorArraySizeV3)
map_1/TensorArrayStack/range (Range)
map_1/TensorArrayStack/TensorArrayGatherV3 (TensorArrayGatherV3)
map_1/while/MatMul (MatMul) /device:GPU:1
map_1/while/TensorArrayWrite/TensorArrayWriteV3 (TensorArrayWriteV3) /device:GPU:1
[[Node: map_1/TensorArray_1 = TensorArrayV3[clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=<unknown>, identical_element_shapes=true, tensor_array_name=""](map_1/TensorArray_1/size)]]
Here is a simple code example to reproduce it:
import tensorflow as tf
import numpy

rc = 1000
sess = tf.Session()

for deviceName in ['/cpu:0', '/device:GPU:0', '/device:GPU:1']:
    with tf.device(deviceName):
        matrices = tf.random_uniform([rc, rc, 4], minval=0, maxval=1, dtype=tf.float32)

        def mult(i):
            product = tf.matmul(matrices[:, :, i], matrices[:, :, i + 1])
            return product

        mul = tf.zeros([rc, rc, 3], dtype=tf.float32)
        mul = tf.map_fn(mult, numpy.array([0, 1, 2]), dtype=tf.float32, parallel_iterations=10)

m = sess.run(mul)
Answer (score: 1)
You could try doing this with a batched matmul instead. Consider the following changes:
import tensorflow as tf
import numpy
import time
import numpy as np

rc = 1000
sess = tf.Session()

# Compute the input on the CPU for comparison later.
vals = np.random.uniform(size=[rc, rc, 4]).astype(np.float32)
mat1 = tf.identity(vals)
mat2 = tf.transpose(vals, [2, 0, 1])

# Store each mul in a list so they are all fetched in one run call.
muls = []

# I only have one GPU.
for deviceName in ['/cpu:0', '/device:GPU:0']:
    with tf.device(deviceName):
        def mult(i):
            product = tf.matmul(mat1[:, :, i], mat1[:, :, i + 1])
            return product

        mul = tf.zeros([rc, rc, 3], dtype=tf.float32)
        mul = tf.map_fn(mult, numpy.array([0, 1, 2]), dtype=tf.float32, parallel_iterations=10)
        muls.append(mul)

# Use the transposed matrix with a shift to do all the matmuls in one go.
mul = tf.matmul(mat2[:-1], mat2[1:])

print(muls)
print(mul)

start = time.time()
m1 = sess.run(muls)
end = time.time()
print("muls:", end - start)

start = time.time()
m2 = sess.run(mul)
end = time.time()
print("mul:", end - start)

print(np.allclose(m1[0], m1[1]))
print(np.allclose(m1[0], m2))
print(np.allclose(m1[1], m2))
The results on my PC:
[<tf.Tensor 'map/TensorArrayStack/TensorArrayGatherV3:0' shape=(3, 1000, 1000) dtype=float32>, <tf.Tensor 'map_1/TensorArrayStack/TensorArrayGatherV3:0' shape=(3, 1000, 1000) dtype=float32>]
Tensor("MatMul:0", shape=(3, 1000, 1000), dtype=float32)
muls: 0.4262731075286865
mul: 0.3794088363647461
True
True
True
You rarely want to use the CPU in lockstep with the GPU like this, because the CPU becomes the bottleneck and the GPU just waits for it to finish. If you do any work on the CPU, it should run asynchronously with the GPU so that both can run at full tilt.
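To come back to the original multi-GPU question: instead of trying to spread a single tf.map_fn over several devices, you can split the batched matmul above across GPUs and concatenate the partial results. Below is a minimal sketch, assuming you have two GPUs named /device:GPU:0 and /device:GPU:1; the slice boundaries are an arbitrary illustration, not part of the answer above.

import tensorflow as tf
import numpy as np

rc = 1000
vals = np.random.uniform(size=[rc, rc, 4]).astype(np.float32)
# Same layout trick as above: put the batch dimension first, shape [4, rc, rc].
mat2 = tf.transpose(tf.constant(vals), [2, 0, 1])

# Products needed: (0,1), (1,2), (2,3). Give each GPU a contiguous slice of
# that batch; here the first two products go to GPU:0 and the last to GPU:1.
slices = [(0, 2), (2, 3)]
devices = ['/device:GPU:0', '/device:GPU:1']

partials = []
for device, (lo, hi) in zip(devices, slices):
    with tf.device(device):
        partials.append(tf.matmul(mat2[lo:hi], mat2[lo + 1:hi + 1]))

# Concatenate the per-GPU results back into shape [3, rc, rc].
result = tf.concat(partials, axis=0)

# allow_soft_placement lets TF fall back to another device if a GPU is missing.
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
out = sess.run(result)
print(out.shape)  # (3, 1000, 1000)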