LSTM fit_generator steps_per_epoch

Date: 2019-08-29 07:14:37

Tags: machine-learning keras lstm

Because my dataset is large, I train my LSTM model with fit_generator and a custom generator.

I have not used an LSTM with fit_generator before, so I am not sure whether my code is correct.

import os
import numpy as np
import pandas as pd
from keras.utils import to_categorical

def generator_v2(trainDir, nb_classes, batch_size):
    print('start generator')
    classes = ["G11", "G15", "G17", "G19", "G32", "G34", "G48", "G49"]
    while 1:
        print('loop generator')
        for root, subdirs, files in os.walk(trainDir):
            for file in files:
                try:
                    # The name of the parent directory is the class label.
                    label = root.split("\\")[-1]
                    label = classes.index(label)
                    label = to_categorical(label, num_classes=nb_classes).reshape(1, nb_classes)
                    df = pd.read_csv(root + "\\" + file)
                    # Split the rows of one csv file into chunks of batch_size rows.
                    batches = int(np.ceil(len(df) / batch_size))
                    for i in range(0, batches):
                        x_batch = df[i * batch_size:min(len(df), i * batch_size + batch_size)].values
                        # Each yield is one sequence: shape (1, timesteps, features).
                        x_batch = x_batch.reshape(1, x_batch.shape[0], x_batch.shape[1])
                        yield x_batch, label
                    del df
                except EOFError:
                    print("error " + file)

trainDir = "data_diff_level2_statistics"
nb_classes = 8
batch_size = 128
MaxLen = 449    # each csv file has 449 rows
batches = int(np.ceil(MaxLen / batch_size))    # chunks per file: ceil(449 / 128) = 4
filesCount = sum(len(files) for r, d, files in os.walk(trainDir))    # total number of csv files

steps_per_epoch = batches * filesCount

from keras.models import Sequential
from keras.layers import LSTM, Dense, Activation, Dropout

model = Sequential()

model.add(LSTM(4, input_shape=(None, 5)))    # None allows variable-length sequences
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['acc'])

model.fit_generator(generator_v2(trainDir, nb_classes, batch_size),
                    steps_per_epoch=steps_per_epoch, epochs=100, verbose=1)
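
As a quick sanity check (a minimal sketch, assuming the layout of class-named subfolders of csv files described above), you can pull one item from the generator and inspect the shapes:

gen = generator_v2(trainDir, nb_classes, batch_size)
x_batch, y_batch = next(gen)
print(x_batch.shape)    # expected (1, 128, 5): one chunk of up to batch_size rows, 5 features
print(y_batch.shape)    # expected (1, 8): one one-hot label per chunk

Note that each yield is a Keras batch of size 1 holding one chunk of rows, so one training step consumes one chunk of a csv file.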

Did I set the right number for steps_per_epoch?

All of my training data together has the shape (230, 449, 5).

So I set steps_per_epoch to 230 * (449 / batch_size).

(449 / batch_size) means that I read each csv file 128 rows at a time.
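
Spelled out with those numbers (a quick check of the computation above, using the per-file rounding that np.ceil applies in the code):

batches_per_file = int(np.ceil(449 / 128))    # ceil(3.51) = 4 chunks per csv file
steps_per_epoch = 230 * batches_per_file      # 230 files * 4 chunks = 920 steps per epoch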

1 Answer:

Answer 0 (score: 0)

The argument steps_per_epoch should equal the total number of samples (the length of your training set) divided by batch_size (the same applies to validation_steps).

In your example, the length of the dataset is given by dataset_length = number_of_csv_files * length_of_csv_file.

So, since your code computes 230 * (449 / batch_size), it is correct; it matches the formula I wrote above.
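
A minimal sketch of that rule (the helper name is illustrative, not a Keras API):

import math

def steps_per_epoch_for(total_samples, batch_size):
    # steps_per_epoch = training-set length / batch_size, rounded up so
    # that the final partial batch still counts as one step.
    return math.ceil(total_samples / batch_size)

One caveat: because the generator in the question batches each csv file separately, it actually yields 230 * ceil(449 / 128) = 920 batches per pass, slightly more than ceil(230 * 449 / 128) = 807. steps_per_epoch should match what the generator really yields, so the per-file count of 920 is the safer value here.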