Out of memory when running a multi-GPU CNN with TensorFlow

Date: 2019-05-22 23:14:08

Tags: tensorflow deep-learning multi-gpu

I am trying to run a simple CNN on CIFAR-10, combining code from these two examples: https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/6_MultiGPU/multigpu_cnn.py

https://github.com/exelban/tensorflow-cifar-10

I am getting an OOM error.

I first tried the code with the full CNN but without multi-GPU support, and it worked fine. Next, I ran the multi-GPU code on its own, and it also worked. Merging the two does not work.

with tf.device('/cpu:0'):
        tower_grads = []
        reuse_vars = False

        # tf Graph input
        X = tf.placeholder(tf.float32, shape=[None, _IMAGE_SIZE * _IMAGE_SIZE * _IMAGE_CHANNELS], name='Input')
        Y = tf.placeholder(tf.float32, shape=[None, _NUM_CLASSES], name='Output')
        phase = tf.placeholder(tf.bool, name='phase')
#         learning_rate = tf.placeholder(tf.float32, shape=[], name='learning_rate')
        keep_prob = tf.placeholder(tf.float32)

        global_step = tf.get_variable(name='global_step', trainable=False, initializer=0)


        # Loop over all GPUs and construct their own computation graph
        for i in range(_NUM_GPUS):
            with tf.device('/gpu:{}'.format(i)):
#                 learning_rate = tf.placeholder(tf.float32, shape=[], name='learning_rate')
#                 keep_prob = tf.placeholder(tf.float32)
                # Split data between GPUs
                _x = X[i * _BATCH_SIZE: (i+1) * _BATCH_SIZE]
                _y = Y[i * _BATCH_SIZE: (i+1) * _BATCH_SIZE]
                print("x shape:",_x.shape)
                print("y shape:",_y.shape)
                # Because dropout behaves differently at training and prediction time, we
                # need to create 2 distinct computation graphs that share the same weights.
                _x = tf.reshape(_x, [-1, _IMAGE_SIZE, _IMAGE_SIZE, _IMAGE_CHANNELS], name='images')
                # Create a graph for training
                logits_train, y_pred_cls = feed_net(_x, _NUM_CLASSES, keep_prob, reuse=reuse_vars, is_training=True)
                # Create another graph for testing that reuses the same weights
                logits_test, y_pred_cls = feed_net(_x, _NUM_CLASSES, keep_prob, reuse=True, is_training=False)

                # Define loss and optimizer (with train logits, for dropout to take effect)
                loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits_train, labels=_y))
                optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
                grads = optimizer.compute_gradients(loss_op)

                # Only the first GPU computes accuracy
                if i == 0:
                    # Evaluate model (with test logits, for dropout to be disabled)
                    correct_pred = tf.equal(tf.argmax(logits_test, 1), tf.argmax(_y, 1))
                    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

                reuse_vars = True
                tower_grads.append(grads)

        tower_grads = average_gradients(tower_grads)
        train_op = optimizer.apply_gradients(tower_grads)

The error occurs when running with more than 1 GPU (it shows up on the 4th GPU), after around 90 iterations (less than one epoch).

ResourceExhaustedError: Ran out of GPU memory when allocating 0 bytes for 
     [[Node: softmax_cross_entropy_with_logits_sg_3 = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:3"](softmax_cross_entropy_with_logits_sg_3/Reshape, softmax_cross_entropy_with_logits_sg_3/Reshape_1)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

     [[Node: main_params/map/while/Less_1/_206 = _Send[T=DT_BOOL, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1905_main_params/map/while/Less_1", _device="/job:localhost/replica:0/task:0/device:GPU:0"](main_params/map/while/Less_1)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
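
Following the hint printed in the error, a minimal sketch of how the allocation report could be enabled, assuming a standard training loop (sess, batch_x, batch_y, and the feed values are illustrative, not from the original code):

# Ask TensorFlow to print the list of allocated tensors if an OOM occurs
# during this particular run call.
run_options = tf.RunOptions(report_tensor_allocations_upon_oom=True)
sess.run(train_op,
         feed_dict={X: batch_x, Y: batch_y, phase: True, keep_prob: 0.5},
         options=run_options)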

Update:

The problem was in how the data was divided between the GPUs. I used tf.split(X, _NUM_GPUS) on both the data and the labels, and then each GPU could be assigned its correct chunk of data.

1 Answer:

Answer 0 (score: 0)

Here is the solution: the problem was in how the data was divided between the GPUs. I used tf.split(X, _NUM_GPUS) on both the data and the labels, and then each GPU could be assigned its correct chunk of data. Also, only one GPU runs the accuracy computation, so it needs to receive the full-size data.
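
A minimal sketch of that change, assuming the batch fed into X and Y holds _NUM_GPUS equal chunks so the split divides evenly (the _x_split/_y_split names are illustrative):

# Split the full feed batch into one equal chunk per GPU along axis 0,
# instead of slicing by _BATCH_SIZE as in the original loop.
_x_split = tf.split(X, _NUM_GPUS)
_y_split = tf.split(Y, _NUM_GPUS)

for i in range(_NUM_GPUS):
    with tf.device('/gpu:{}'.format(i)):
        _x = tf.reshape(_x_split[i], [-1, _IMAGE_SIZE, _IMAGE_SIZE, _IMAGE_CHANNELS], name='images')
        _y = _y_split[i]
        # ... build the per-tower graph (feed_net, loss, gradients) as in the question ...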