How to handle the last batch with Keras fit_generator

Asked: 2019-04-28 11:45:21

Tags: python keras deep-learning generator

I am using a custom batch generator to try to work around the incompatible-shapes problem (a BroadcastGradientArgs error) that I get with the standard model.fit() function because the last batch of the training data is smaller than the others. I use the batch generator mentioned here with the model.fit_generator() function:

import math
import numpy as np
from keras.utils import Sequence


class Generator(Sequence):
    # Class is a dataset wrapper for better training performance
    def __init__(self, x_set, y_set, batch_size=256):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size
        self.indices = np.arange(self.x.shape[0])

    def __len__(self):
        return math.floor(self.x.shape[0] / self.batch_size) 

    def __getitem__(self, idx):
        inds = self.indices[idx * self.batch_size:(idx + 1) * self.batch_size] #Line A
        batch_x = self.x[inds]
        batch_y = self.y[inds]
        return batch_x, batch_y

    def on_epoch_end(self):
        np.random.shuffle(self.indices)

However, it seems to drop the last batch whenever it is smaller than the given batch size. How can I update the generator to include the last batch, padding it out (for example) with some repeated samples?

Also, I don't really understand how "Line A" works!

Update: here is how I use the generator with my model:

# dummy model
input_1 = Input(shape=(None,))
...
dense_1 = Dense(10, activation='relu')(input_1)
output_1 = Dense(1, activation='sigmoid')(dense_1)

model = Model(input_1, output_1)
print(model.summary())

# Compile and fit_generator
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

train_data_gen = Generator(x1_train, y_train, batch_size)
test_data_gen = Generator(x1_test, y_test, batch_size)

model.fit_generator(generator=train_data_gen, validation_data=test_data_gen, epochs=epochs, shuffle=False, verbose=1)

loss, accuracy = model.evaluate_generator(generator=test_data_gen)
print('Test Loss: %0.5f Accuracy: %0.5f' % (loss, accuracy))

2 Answers:

Answer 0 (score: 2)

I think the culprit is this line:

    return math.floor(self.x.shape[0] / self.batch_size)

Replacing it with this should work:

    return math.ceil(self.x.shape[0] / self.batch_size) 

Imagine you have 100 samples and a batch size of 32. That divides into 3.125 batches. With math.floor, __len__ returns 3, so the trailing 0.125 of a batch (the last 4 samples) is never served; with math.ceil it returns 4 and the smaller final batch is included.
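
A quick check of the arithmetic, as a standalone sketch with the numbers above:

    import math

    n_samples, batch_size = 100, 32
    print(math.floor(n_samples / batch_size))  # 3 -> the last 4 samples are dropped
    print(math.ceil(n_samples / batch_size))   # 4 -> a final batch of 4 samples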

As for Line A: if the batch size is 32, then when the index idx is 1, [idx * self.batch_size:(idx + 1) * self.batch_size] becomes [32:64]; in other words, it selects the 33rd through 64th elements (positions 32 to 63) of self.indices, as the small sketch below shows.
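
Here is a minimal standalone illustration of that slice, with made-up sizes:

    import numpy as np

    indices = np.arange(100)  # stands in for self.indices
    batch_size, idx = 32, 1
    inds = indices[idx * batch_size:(idx + 1) * batch_size]
    print(inds[0], inds[-1])  # 32 63 -> the 33rd..64th elements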

Update 2: changed the input shape to None, used an LSTM, and added evaluation:

import os
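# Hide all GPUs so the example runs on the CPU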
os.environ['CUDA_VISIBLE_DEVICES'] = ""
import math
import numpy as np
from keras.models import Model
from keras.utils import Sequence
from keras.layers import Input, Dense, LSTM


class Generator(Sequence):
    # Class is a dataset wrapper for better training performance
    def __init__(self, x_set, y_set, batch_size=256):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size
        self.indices = np.arange(self.x.shape[0])

    def __len__(self):
        return math.ceil(self.x.shape[0] / self.batch_size)

    def __getitem__(self, idx):
        inds = self.indices[idx * self.batch_size:(idx + 1) * self.batch_size]  # Line A
        batch_x = self.x[inds]
        batch_y = self.y[inds]
        return batch_x, batch_y

    def on_epoch_end(self):
        np.random.shuffle(self.indices)


# dummy model
input_1 = Input(shape=(None, 10))
x = LSTM(90)(input_1)
x = Dense(10)(x)
x = Dense(1, activation='sigmoid')(x)

model = Model(input_1, x)
print(model.summary())

# Compile and fit_generator
model.compile(optimizer='adam', loss='binary_crossentropy')

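# Random dummy data, shaped (samples, timesteps, features)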
x1_train = np.random.rand(1590, 20, 10)
x1_test = np.random.rand(90, 20, 10)
y_train = np.random.rand(1590, 1)
y_test = np.random.rand(90, 1)

train_data_gen = Generator(x1_train, y_train, 256)
test_data_gen = Generator(x1_test, y_test, 256)

model.fit_generator(generator=train_data_gen,
                    validation_data=test_data_gen,
                    epochs=5,
                    shuffle=False,
                    verbose=1)

loss = model.evaluate_generator(generator=test_data_gen)
print('Test Loss: %0.5f' % loss)

This runs without any issues.
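
As a quick sanity check (a sketch reusing the names from the script above), you can confirm that with math.ceil the generator simply emits a smaller final batch, which Keras accepts:

    gen = Generator(x1_train, y_train, 256)
    print(len(gen))              # 7 -> ceil(1590 / 256)
    last_x, last_y = gen[len(gen) - 1]
    print(last_x.shape)          # (54, 20, 10) -> the leftover 1590 - 6 * 256 samples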

Answer 1 (score: 0)

Beyond the strategy in the other answer, this kind of problem can be handled in different ways, depending on your intent.

If you want to repeat some of the samples in the last batch, as suggested in the question (until the size of the last batch equals batch_size), you can, for example, check whether the end of the dataset has been passed and, if it has, substitute another sample. For example:

batch_size = 32
N_batches = int(np.ceil(len(dataset) / batch_size))
batch_counter = 0
while True:
    current_batch = []
    idx_start = batch_size * batch_counter
    idx_end = batch_size * (batch_counter + 1)
    for idx in range(idx_start, idx_end):
        # Next line clamps idx to the index of the last sample in the dataset:
        idx = len(dataset) - 1 if (idx > len(dataset) - 1) else idx
        current_batch.append(dataset[idx])
    # ... yield or otherwise consume current_batch here ...
    batch_counter += 1
    if batch_counter == N_batches:
        batch_counter = 0

Obviously, it does not have to be the last sample; it could just as well be (for example) a random sample from the dataset:

idx = random.randint(0, len(dataset) - 1) if (idx > len(dataset) - 1) else idx
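
If you would rather keep the Sequence API from the question, the same padding idea can be expressed there too. Below is an untested sketch (the class name PaddedGenerator is made up) that repeats the last index of the final slice until the batch is full:

import math
import numpy as np
from keras.utils import Sequence


class PaddedGenerator(Sequence):
    # Like the question's Generator, but pads the final batch with
    # repeated samples so every batch has exactly batch_size items.
    def __init__(self, x_set, y_set, batch_size=256):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size
        self.indices = np.arange(self.x.shape[0])

    def __len__(self):
        return math.ceil(self.x.shape[0] / self.batch_size)

    def __getitem__(self, idx):
        inds = self.indices[idx * self.batch_size:(idx + 1) * self.batch_size]
        if len(inds) < self.batch_size:
            # Repeat the final index until the batch reaches batch_size
            pad = np.full(self.batch_size - len(inds), inds[-1])
            inds = np.concatenate([inds, pad])
        return self.x[inds], self.y[inds]

    def on_epoch_end(self):
        np.random.shuffle(self.indices)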

Hope this helps.