Keras DataGenerator using Keras Sequence

Asked: 2019-08-06 12:58:47

Tags: python tensorflow keras

I am trying to improve the training speed of my model. I do a lot of preprocessing and augmentation (which runs on the CPU), and it slows my training down. So I tried to implement data loading and preprocessing with a Keras Sequence, following the Keras docs and this Stanford example. So far this has made my training a lot slower, and I am sure I have a bug somewhere. When I run my training script with 4 workers and use_multiprocessing=True, I get the following log:

Epoch 8/10
Using TensorFlow backend.
Using TensorFlow backend.
Using TensorFlow backend.
Using TensorFlow backend.
Using TensorFlow backend.
Using TensorFlow backend.
8/9 [=========================>....] - ETA: 2s - loss: 444.2380Using TensorFlow backend.
9/9 [==============================] - 26s 3s/step - loss: 447.4939 - val_loss: 308.3012
Using TensorFlow backend.
Epoch 9/10
Using TensorFlow backend.
Using TensorFlow backend.
Using TensorFlow backend.
Using TensorFlow backend.
Using TensorFlow backend.
Using TensorFlow backend.
8/9 [=========================>....] - ETA: 2s - loss: 421.9372Using TensorFlow backend.
9/9 [==============================] - 26s 3s/step - loss: 418.9702 - val_loss: 263.9197

It seems that somewhere in my code, TensorFlow is loaded once per worker (which comes to 8 because of the validation set) every epoch. I don't think this is how Sequences are supposed to work?

My DataGenerator:

import numpy as np
from keras.utils import Sequence

# get_random_data and preprocess_true_boxes are project-specific helpers
# (augmentation and target encoding), imported from elsewhere in the project.

class DataGenerator(Sequence):
    def __init__(self, annotation_lines, batch_size, input_shape, anchors, num_classes, max_boxes=80):
        self.annotations_lines = annotation_lines
        self.batch_size = batch_size
        self.input_shape = input_shape
        self.anchors = anchors
        self.num_classes = num_classes
        self.max_boxes = max_boxes

    def __len__(self):
        return int(np.ceil(len(self.annotations_lines) / float(self.batch_size)))

    def __getitem__(self, idx):
        annotation_lines = self.annotations_lines[idx * self.batch_size:(idx + 1) * self.batch_size]

        image_data = []
        box_data = []
        for annotation_line in annotation_lines:
            image, box = get_random_data(annotation_line, self.input_shape, random=True, max_boxes=self.max_boxes)
            image_data.append(image)
            box_data.append(box)
        image_data = np.array(image_data)
        box_data = np.array(box_data)
        y_true = preprocess_true_boxes(box_data, self.input_shape, self.anchors, self.num_classes)
        # dummy targets, one per sample (the last batch can be smaller than batch_size)
        return [image_data, *y_true], np.zeros(len(image_data))

Part of my training script:

batch_size = batch_size_complete  # note that more GPU memory is required after unfreezing the body

data_gen_train = DataGenerator(lines, batch_size, input_shape, anchors, num_classes)
data_gen_validation = DataGenerator(validation_lines, batch_size, input_shape, anchors, num_classes)

print('Train on {} samples, val on {} samples, with batch size {}.'.format(num_train, num_val, batch_size))
r = model.fit_generator(data_gen_train,
                        steps_per_epoch=max(1, num_train // batch_size),
                        validation_data=data_gen_validation,
                        validation_steps=max(1, num_val // batch_size),
                        epochs=epochs,
                        initial_epoch=initial_epoch,
                        callbacks=[logging, checkpoint, reduce_lr, early_stopping],
                        workers=workers,
                        use_multiprocessing=True)
model.save_weights(log_dir + 'trained_weights_final.h5')

2 answers:

Answer 0 (score: 0)

Training speed depends on many factors, such as the batch size, the size of the input images, the learning rate, steps per epoch and validation steps. Start by investigating these one at a time, and set use_multiprocessing=False, because the repeated TensorFlow backend messages written during training should not be there.

Answer 1 (score: 0)

I see "Using TensorFlow backend" printed many times, which looks as if Keras is being initialized over and over again in each worker process.

Maybe you should simply try use_multiprocessing=False (you can still have multiple workers).