How to use a keras.utils.Sequence data generator with tf.distribute.MirroredStrategy for multi-GPU model training in TensorFlow?

Time: 2019-12-04 22:51:52

Tags: keras tensorflow2.0 multi-gpu

I want to train a model on multiple GPUs with TensorFlow 2.0. In the TensorFlow guide on distributed training (https://www.tensorflow.org/guide/distributed_training), a tf.data dataset is converted into a distributed dataset as follows:

dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)

However, I would like to use my own custom data generator instead (for example, a keras.utils.Sequence data generator, together with keras.utils.data_utils.OrderedEnqueuer for asynchronous batch generation). But the mirrored_strategy.experimental_distribute_dataset method only supports tf.data datasets. How can I use my Keras data generator here?

Thanks!

2 answers:

Answer 0 (score: 0)

In the same situation, I used tf.data.Dataset.from_generator together with my keras.utils.Sequence object, and it solved my problem!

import tensorflow as tf
from tensorflow.keras.utils import OrderedEnqueuer

train_generator = SegmentationMultiGenerator(datasets, folder)  # My keras.utils.Sequence object

def generator():
    multi_enqueuer = OrderedEnqueuer(train_generator, use_multiprocessing=True)
    multi_enqueuer.start(workers=10, max_queue_size=10)
    batch_generator = multi_enqueuer.get()  # get() returns a generator; create it once, not per iteration
    while True:
        batch_xs, batch_ys, dset_index = next(batch_generator)  # I have three outputs
        yield batch_xs, batch_ys, dset_index

dataset = tf.data.Dataset.from_generator(generator,
                                         output_types=(tf.float64, tf.float64, tf.int64),
                                         output_shapes=(tf.TensorShape([None, None, None, None]),
                                                        tf.TensorShape([None, None, None, None]),
                                                        tf.TensorShape([None, None])))

strategy = tf.distribute.MirroredStrategy()

train_dist_dataset = strategy.experimental_distribute_dataset(dataset)

Note that this is my first working solution - for now, I find it most convenient to replace the None entries in the output shapes with the actual shapes that I have found to work.
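
For completeness, here is a hedged sketch (not from the original answer) of how the resulting train_dist_dataset might be consumed in a custom training loop. It assumes model, loss_fn and optimizer were created inside strategy.scope(); strategy.run is available in recent TF 2.x releases (TF 2.0/2.1 used strategy.experimental_run_v2 instead):

# Assumed sketch: model, loss_fn and optimizer are hypothetical objects
# created under strategy.scope().
@tf.function
def train_step(dist_inputs):
    def step_fn(inputs):
        batch_xs, batch_ys, dset_index = inputs
        with tf.GradientTape() as tape:
            preds = model(batch_xs, training=True)
            loss = tf.reduce_mean(loss_fn(batch_ys, preds))
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss
    # Run one step on every replica, then average the per-replica losses.
    per_replica_losses = strategy.run(step_fn, args=(dist_inputs,))
    return strategy.reduce(tf.distribute.ReduceOp.MEAN, per_replica_losses, axis=None)

for dist_inputs in train_dist_dataset:
    loss = train_step(dist_inputs)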

Answer 1 (score: 0)

Without using an Enqueuer, here is another way, assuming you have a generator dg that yields samples in the form (feature, label) when called:

import tensorflow as tf
import numpy as np


def get_tf_data_Dataset(data_generator_settings_dict):
    # Dimensions of a single feature (x) and label (y) sample, plus the
    # total number of samples the generator dg will yield.
    length_req = data_generator_settings_dict["length"]
    x_d1 = data_generator_settings_dict["x_d1"]
    x_d2 = data_generator_settings_dict["x_d2"]
    x_d3 = data_generator_settings_dict["x_d3"]
    y_d1 = data_generator_settings_dict["y_d1"]
    y_d2 = data_generator_settings_dict["y_d2"]
    y_d3 = data_generator_settings_dict["y_d3"]
    # Pre-allocate one array per sample, then drain the generator into them.
    list_of_x_arrays = [np.zeros((x_d1, x_d2, x_d3)) for _ in range(length_req)]
    list_of_y_arrays = [np.zeros((y_d1, y_d2, y_d3)) for _ in range(length_req)]
    list_of_tuple_samples = [(x, y) for (x, y) in dg()]
    list_of_x_samples = [x for (x, y) in list_of_tuple_samples]
    list_of_y_samples = [y for (x, y) in list_of_tuple_samples]
    for sample_index in range(length_req):
        list_of_x_arrays[sample_index][:] = list_of_x_samples[sample_index]
        list_of_y_arrays[sample_index][:] = list_of_y_samples[sample_index]
    return tf.data.Dataset.from_tensor_slices((list_of_x_arrays, list_of_y_arrays))

It is convoluted, but guaranteed to work. It also implies that the __call__ method of dg is a for loop (after __init__, of course):

def __call__(self):
    for _ in range(self.length):
        # generate x (a single feature sample)
        # generate y (the single matching label sample)
        yield x, y
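
To tie this together, a hedged end-to-end sketch (not part of the original answer) of a minimal generator class and the calls that wire it up; the MyGenerator name, the sample dimensions, and the batch size are hypothetical, and the dataset is batched before distribution since experimental_distribute_dataset splits batches across replicas:

# Hypothetical minimal generator in the shape the answer assumes.
class MyGenerator:
    def __init__(self, length):
        self.length = length

    def __call__(self):
        for _ in range(self.length):
            x = np.random.rand(4, 4, 3)  # single feature sample
            y = np.random.rand(4, 4, 1)  # matching label sample
            yield x, y

dg = MyGenerator(length=100)  # get_tf_data_Dataset reads dg from the enclosing scope
settings = {"length": 100,
            "x_d1": 4, "x_d2": 4, "x_d3": 3,
            "y_d1": 4, "y_d2": 4, "y_d3": 1}
dataset = get_tf_data_Dataset(settings).batch(8)

strategy = tf.distribute.MirroredStrategy()
train_dist_dataset = strategy.experimental_distribute_dataset(dataset)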