Keras fit_generator and fit give different results

Asked: 2018-10-01 23:20:42

Tags: python-3.x tensorflow keras deep-learning generator

I am training a convolutional neural network on a dataset of facial images. The dataset contains 10,000 images of size 700 x 700, and my model has 12 layers. I use a generator function to read the images into Keras's fit_generator function, as shown below.

train_file_names ==> Python list containing the filenames of the training instances
train_class_labels ==> Numpy array of one-hot encoded class labels ([0, 1, 0], [0, 0, 1], etc.)
train_data ==> Numpy array of training instances
train_steps_per_epoch ==> 16 (the batch size is 400 and I have 6,400 instances to train on, so a single pass over the entire dataset takes 16 iterations)
batch_size ==> 400
calls_made ==> when the generator reaches the end of the training instances, it resets the indices so the next epoch loads data from the first index again

I pass this generator as an argument to Keras's fit_generator function to produce a fresh batch of data for each epoch.

val_data, val_class_labels ==> Numpy arrays of validation data
epochs ==> number of epochs

Using Keras fit_generator

model.fit_generator(generator=train_generator, steps_per_epoch=train_steps_per_epoch, epochs=epochs, use_multiprocessing=False, validation_data=[val_data, val_class_labels], verbose=True, callbacks=[history, model_checkpoint], shuffle=True, initial_epoch=0) 

Code

def train_data_generator(self):
    index_start = index_end = 0
    temp = 0
    calls_made = 0

    while temp < train_steps_per_epoch:
        index_end = index_start + batch_size
        index = 0
        for temp1 in range(index_start, index_end):
            # Read the image as grayscale, transpose, and resize into the batch buffer
            img = cv2.imread(str(TRAIN_DIR / train_file_names[temp1]), cv2.IMREAD_GRAYSCALE).T
            train_data[index] = cv2.resize(img, (self.ROWS, self.COLS), interpolation=cv2.INTER_CUBIC)
            index += 1
        yield train_data, self.train_class_labels[index_start:index_end]
        calls_made += 1
        if calls_made == train_steps_per_epoch:
            # End of the epoch: reset so the next epoch starts from the first index
            index_start = 0
            temp = 0
            calls_made = 0
        else:
            index_start = index_end
            temp += 1
        gc.collect()

Output of fit_generator

Epoch 86/300
16/16 [==============================] - 16s 1s/step - loss: 1.5739 - acc: 0.2991 - val_loss: 12.0076 - val_acc: 0.2110
Epoch 87/300
16/16 [==============================] - 16s 1s/step - loss: 1.6010 - acc: 0.2549 - val_loss: 11.6689 - val_acc: 0.2016
Epoch 88/300
16/16 [==============================] - 16s 1s/step - loss: 1.5750 - acc: 0.2391 - val_loss: 10.2663 - val_acc: 0.2004
Epoch 89/300
16/16 [==============================] - 16s 1s/step - loss: 1.5526 - acc: 0.2641 - val_loss: 11.8809 - val_acc: 0.2249
Epoch 90/300
16/16 [==============================] - 16s 1s/step - loss: 1.5867 - acc: 0.2602 - val_loss: 12.0392 - val_acc: 0.2010
Epoch 91/300
16/16 [==============================] - 16s 1s/step - loss: 1.5524 - acc: 0.2609 - val_loss: 12.0254 - val_acc: 0.2027

My problem is that when I use fit_generator with the generator function above, the model loss does not improve at all and the validation accuracy stays very poor. But when I use Keras's fit function as shown below, the loss decreases and the validation accuracy is much better.

Using the Keras fit function without a generator

model.fit(self.train_data, self.train_class_labels, batch_size=self.batch_size, epochs=self.epochs, validation_data=[self.val_data, self.val_class_labels], verbose=True, callbacks=[history, model_checkpoint])    

Output after training with the fit function

Epoch 25/300
6400/6400 [==============================] - 20s 3ms/step - loss: 0.0207 - acc: 0.9939 - val_loss: 4.1009 - val_acc: 0.4916
Epoch 26/300
6400/6400 [==============================] - 20s 3ms/step - loss: 0.0197 - acc: 0.9948 - val_loss: 2.4758 - val_acc: 0.5568
Epoch 27/300
6400/6400 [==============================] - 20s 3ms/step - loss: 0.0689 - acc: 0.9800 - val_loss: 1.2843 - val_acc: 0.7361
Epoch 28/300
6400/6400 [==============================] - 20s 3ms/step - loss: 0.0207 - acc: 0.9947 - val_loss: 5.6979 - val_acc: 0.4560
Epoch 29/300
6400/6400 [==============================] - 20s 3ms/step - loss: 0.0353 - acc: 0.9908 - val_loss: 1.0801 - val_acc: 0.7817
Epoch 30/300
6400/6400 [==============================] - 20s 3ms/step - loss: 0.0362 - acc: 0.9896 - val_loss: 3.7851 - val_acc: 0.5173
Epoch 31/300
6400/6400 [==============================] - 20s 3ms/step - loss: 0.0481 - acc: 0.9896 - val_loss: 1.1152 - val_acc: 0.7795
Epoch 32/300
6400/6400 [==============================] - 20s 3ms/step - loss: 0.0106 - acc: 0.9969 - val_loss: 1.4803 - val_acc: 0.7372

2 answers:

Answer 0 (score: 0)

You have to make sure your data generator shuffles the data between epochs. I suggest you create a list of the possible indices outside your loop, randomize it with random.shuffle, and then iterate over it inside your loop.

Source: https://github.com/keras-team/keras/issues/2389 and my own experience.
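As a sketch of that suggestion (the function and variable names here are hypothetical, not from the question), a generator that reshuffles its index list at the start of every epoch could look like this. Shuffling the indices rather than the data keeps filenames and labels aligned:

```python
import random

def shuffled_batch_generator(file_names, labels, batch_size):
    # Build the list of possible indices once, outside the loop
    indices = list(range(len(file_names)))
    while True:
        random.shuffle(indices)  # re-randomize at the start of each epoch
        for start in range(0, len(indices), batch_size):
            batch = indices[start:start + batch_size]
            yield ([file_names[i] for i in batch],
                   [labels[i] for i in batch])
```

In the question's setting, the yielded filenames would then be read with cv2.imread before being handed to the model.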

Answer 1 (score: 0)

This is most likely due to the lack of data shuffling in your data generator. I ran into the same problem. I changed shuffle=True, but with no success. Then I built the shuffling into a custom generator. Here is the custom generator suggested by the Keras documentation:

import math
from keras.utils import Sequence

class Generator(Sequence):
    # Dataset wrapper for better training performance
    def __init__(self, x_set, y_set, batch_size=256):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size

    def __len__(self):
        return math.ceil(self.x.shape[0] / self.batch_size)

    def __getitem__(self, idx):
        batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]
        return batch_x, batch_y

And here it is with the shuffle built in:

import math
import numpy as np
from keras.utils import Sequence

class Generator(Sequence):
    # Dataset wrapper for better training performance
    def __init__(self, x_set, y_set, batch_size=256):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size
        self.indices = np.arange(self.x.shape[0])

    def __len__(self):
        return math.ceil(self.x.shape[0] / self.batch_size)

    def __getitem__(self, idx):
        inds = self.indices[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_x = self.x[inds]
        batch_y = self.y[inds]
        return batch_x, batch_y

    def on_epoch_end(self):
        # Reshuffle the indices so each epoch sees the batches in a new order
        np.random.shuffle(self.indices)

With this, the model converged nicely. Credits to fculinovic.
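The shuffled Sequence above can be sanity-checked without Keras. This sketch swaps in a minimal stand-in for keras.utils.Sequence (the stub class and the class name ShuffledGenerator are assumptions for the sketch, not part of the original answer) and shows that every epoch still covers the whole dataset with x and y kept aligned:

```python
import math
import numpy as np

class Sequence:
    """Minimal stand-in for keras.utils.Sequence, only for this sketch."""
    def on_epoch_end(self):
        pass

class ShuffledGenerator(Sequence):
    def __init__(self, x_set, y_set, batch_size=256):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size
        self.indices = np.arange(self.x.shape[0])  # one index per sample

    def __len__(self):
        # Number of batches per epoch, counting the final partial batch
        return math.ceil(self.x.shape[0] / self.batch_size)

    def __getitem__(self, idx):
        inds = self.indices[idx * self.batch_size:(idx + 1) * self.batch_size]
        return self.x[inds], self.y[inds]

    def on_epoch_end(self):
        # Reshuffle the index array so the next epoch sees a new batch order
        np.random.shuffle(self.indices)
```

Because the shuffle happens on the index array rather than on the data itself, batch_x and batch_y stay aligned batch after batch, which is exactly what the plain slicing version above also guarantees, but with a fresh order each epoch.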