How to use multi-GPU in Keras with an applications model that has shared weights

Time: 2019-06-06 13:02:17

Tags: python keras

I want to use a Keras applications model (for example VGG16) on multiple GPUs, but I run into an error.

I tried it on a single GPU and it works correctly, but the multi-GPU version fails. The code looks like this:

    import numpy as np
    import tensorflow as tf
    import keras

    import config  # configuration module providing input_shape and VGG_MODEL_PATH

    with tf.device('/cpu:0'):
        input1 = keras.layers.Input(config.input_shape)
        input2 = keras.layers.Input(config.input_shape)
        sub_model = keras.applications.VGG16(include_top=False, weights=config.VGG_MODEL_PATH,
                                             input_shape=config.input_shape)
        output1 = sub_model(input1)
        output2 = sub_model(input1)
        model = keras.Model(inputs=[input1, input2], outputs=[output1, output2])
    parallel_model = keras.utils.multi_gpu_model(model, gpus=3)
    parallel_model.compile('sgd', loss=['mse', 'mse'])
    parallel_model.fit([np.random.random([10, 128, 128, 3]), np.random.random([10, 128, 128, 3])],
                       [np.random.random([10, 4, 4, 512]), np.random.random([10, 4, 4, 512])])

The error message is:

Traceback (most recent call last):
  File "/data00/home/liangdong.tony/PycharmProject/RetrievalCCWebVideo/AE/demo.py", line 145, in <module>
    parallel_model = keras.utils.multi_gpu_model(model, gpus=3)
  File "/data00/home/liangdong.tony/.local/lib/python2.7/site-packages/keras/utils/training_utils.py", line 177, in multi_gpu_model
    return Model(model.inputs, merged)
  File "/data00/home/liangdong.tony/.local/lib/python2.7/site-packages/keras/legacy/interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "/data00/home/liangdong.tony/.local/lib/python2.7/site-packages/keras/engine/topology.py", line 1811, in __init__
    'Layer names: ', all_names)
RuntimeError: ('The name "vgg16" is used 2 times in the model. All layer names should be unique. Layer names: ', ['input_1', 'input_2', 'lambda_1', 'lambda_2', 'lambda_3', 'lambda_4', 'lambda_5', 'lambda_6', 'model_1', 'vgg16', 'vgg16'])

2 Answers:

Answer 0 (score: 0)

I am only guessing, but your error log says 'The name "vgg16" is used 2 times in the model'.

My guess is that creating output1 and output2 with

    output1 = sub_model(input1)
    output2 = sub_model(input1)

and adding both to the model is what creates the duplicate layer names for the VGG16 model. Maybe you can use the other input (input2) for the second branch?
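For illustration, a minimal sketch of that suggestion, keeping a single shared VGG16 and reusing the question's own config module (so `config.input_shape` and `config.VGG_MODEL_PATH` are assumed to exist):

    import keras
    import config  # the question's configuration module

    input1 = keras.layers.Input(config.input_shape)
    input2 = keras.layers.Input(config.input_shape)
    # One shared VGG16 instance; calling it on two different inputs reuses the same weights.
    sub_model = keras.applications.VGG16(include_top=False, weights=config.VGG_MODEL_PATH,
                                         input_shape=config.input_shape)
    output1 = sub_model(input1)
    output2 = sub_model(input2)  # second branch fed with input2 instead of input1
    model = keras.Model(inputs=[input1, input2], outputs=[output1, output2])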

You could also try renaming the model:

output1 = sub_model(input1)
sub_model.name = "VGG16_2"  # rename the shared model before it is called a second time

output2 = sub_model(input1)

If you can provide more code, I could test it and try to fix the issue :)

This also seems to be a similar question.

Hope this helps.

Answer 1 (score: 0)

I found a workaround, although it is not very elegant. Here is the code:

import tensorflow as tf
from tensorflow.keras import backend as K


def slice_batch(x, n_gpus, part):
    # Return the `part`-th slice of the batch dimension of x when the batch
    # is split into n_gpus pieces; the last piece absorbs any remainder.
    sh = K.shape(x)
    L = sh[0] // n_gpus
    if part == n_gpus - 1:
        return x[part * L:]
    return x[part * L:(part + 1) * L]


def multi_gpu_wrapper(single_model, num_gpu):
    inputs = single_model.inputs
    towers = []
    concat_layer = tf.keras.layers.Concatenate(axis=0)
    # Build one replica ("tower") of the model per GPU, each fed its own slice of the batch.
    for gpu_id in range(num_gpu):
        with tf.device('/gpu:%d' % gpu_id):
            cur_inputs = []
            for model_input in inputs:
                # A fresh Lambda per tower so that part=gpu_id is passed explicitly
                # instead of being captured through a shared closure.
                slice_layer = tf.keras.layers.Lambda(
                    slice_batch, arguments={'n_gpus': num_gpu, 'part': gpu_id})
                cur_inputs.append(slice_layer(model_input))
            towers.append(single_model(cur_inputs))
            print(towers[-1])  # debug: the list of per-tower output tensors
    # Concatenate the per-GPU outputs back into full-batch tensors on the CPU.
    outputs = []
    num_output = len(towers[-1])
    with tf.device('/cpu:0'):
        for i in range(num_output):
            tmp_outputs = []
            for j in range(num_gpu):
                tmp_outputs.append(towers[j][i])
            outputs.append(concat_layer(tmp_outputs))
    multi_gpu_model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
    return multi_gpu_model


if __name__ == '__main__':
    import os

    import numpy as np

    import config  # configuration module providing input_shape and VGG_MODEL_PATH

    # Make GPUs 0, 1 and 3 visible; TensorFlow remaps them to /gpu:0../gpu:2.
    gpu_ids = "0,1,3"
    os.environ["CUDA_VISIBLE_DEVICES"] = gpu_ids

    # Build the shared-weights two-branch model on the CPU, as in the question,
    # but with the second branch fed by input2.
    with tf.device('/cpu:0'):
        input1 = tf.keras.layers.Input(config.input_shape)
        input2 = tf.keras.layers.Input(config.input_shape)
        sub_model = tf.keras.applications.VGG16(include_top=False, weights=config.VGG_MODEL_PATH,
                                                input_shape=config.input_shape)
        output1 = sub_model(input1)
        output2 = sub_model(input2)
        model = tf.keras.Model(inputs=[input1, input2], outputs=[output1, output2])

    multi_gpu_model = multi_gpu_wrapper(model, 3)
    multi_gpu_model.compile('sgd', loss=['mse', 'mse'])
    multi_gpu_model.fit([np.random.random([1000, 128, 128, 3]), np.random.random([1000, 128, 128, 3])],
                        [np.random.random([1000, 4, 4, 512]), np.random.random([1000, 4, 4, 512])],
                        batch_size=128)
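As a quick sanity check on the batch-splitting arithmetic in slice_batch, here is a small NumPy-only sketch (the toy array and sizes are purely illustrative) of how a batch of 10 samples would be divided across 3 GPUs:

    import numpy as np

    # Mimic slice_batch on a toy "batch" of 10 samples split into 3 parts.
    x = np.arange(10)
    n_gpus = 3
    L = x.shape[0] // n_gpus  # 3 samples per part
    parts = [x[p * L:] if p == n_gpus - 1 else x[p * L:(p + 1) * L] for p in range(n_gpus)]
    print([len(p) for p in parts])  # [3, 3, 4] -- the last part absorbs the remainder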

However, I found that GPU utilization is quite low with this solution.