Iteratively training a Keras model on a very large number of examples

Time: 2019-06-21 07:14:04

Tags: python python-3.x keras deep-learning sequential

I am building a convolutional network that takes a large 3D array as input. Because the array is so large, of shape (60000, 100, 100), my computer runs out of memory just initializing the input. Can I train the model in chunks instead, e.g. feed it (1000, 100, 100) 60 times, so the full training set never has to be held in memory at once?

I ran into this problem while processing a huge dataset and vectorizing the words in it.
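The chunked scheme described above (feeding 1000 samples at a time, 60 times) can be sketched as a Python generator. This is only a sketch under assumptions about the surrounding code: `texts`, `labels`, and the `vectorize` helper are hypothetical stand-ins for however the word embeddings are actually computed.

```python
import numpy as np

def chunk_generator(texts, labels, chunk_size, length, vector_size, vectorize):
    """Yield (X, y) chunks so the full (60000, 100, 100) array
    never has to exist in memory at once."""
    n = len(texts)
    while True:  # Keras expects training generators to loop forever
        for start in range(0, n, chunk_size):
            batch = texts[start:start + chunk_size]
            # Only chunk_size rows are materialized at any moment
            X = np.zeros((len(batch), length, vector_size), dtype=np.float32)
            for i, text in enumerate(batch):
                X[i] = vectorize(text)  # one embedding matrix per sample
            yield X, labels[start:start + chunk_size]
```

With the Keras versions current when this question was asked, such a generator is passed to `fit_generator`, e.g. `convmodel.fit_generator(chunk_generator(train_texts, Y_train, 1000, 100, 100, vectorize), steps_per_epoch=60, epochs=10)`; in newer releases `model.fit` accepts the generator directly.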

import numpy as np
from keras import backend as K
from keras.models import Sequential
from keras.layers import Conv1D, Dropout, Flatten, Dense
from keras.optimizers import Adam
from keras.callbacks import EarlyStopping

X_train = np.zeros((train.shape[0], length, vector_size), dtype=K.floatx())  # raises MemoryError: shape is (60000, 100, 100)
# ... further code computes word embeddings and fills X_train and Y_train
convmodel = Sequential()

convmodel.add(Conv1D(32, kernel_size=3, activation='elu', padding='same', input_shape=(length, vector_size)))  # length = 100, vector_size = 100
convmodel.add(Conv1D(32, kernel_size=3, activation='elu', padding='same'))
convmodel.add(Dropout(0.25))

convmodel.add(Conv1D(32, kernel_size=2, activation='elu', padding='same'))
convmodel.add(Conv1D(32, kernel_size=2, activation='elu', padding='same'))
convmodel.add(Dropout(0.25))

convmodel.add(Flatten())

convmodel.add(Dense(256, activation='tanh'))
convmodel.add(Dropout(0.3))

convmodel.add(Dense(2, activation='softmax'))

convmodel.compile(loss='categorical_crossentropy',
              optimizer=Adam(lr=0.0001, decay=1e-6),
              metrics=['accuracy'])
convmodel.fit(X_train, Y_train,  # X_train has shape (60000, 100, 100)
          batch_size=128,
          shuffle=True,
          epochs=10,
          validation_data=(X_test, Y_test),
          callbacks=[EarlyStopping(min_delta=0.00025, patience=2)])
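If the data should stay a single indexable array (so the `fit` call above, with `batch_size=128` and `shuffle=True`, can remain unchanged), another option is to back the array with a disk file via NumPy's memmap support instead of RAM. Below is a minimal sketch with deliberately small demo sizes; in the question's case the shape would be (60000, 100, 100), and `row_embedding` is a hypothetical placeholder for the real embedding computation.

```python
import os
import tempfile
import numpy as np

# Small demo sizes; the question's real shape is (60000, 100, 100)
n_samples, length, vector_size = 50, 10, 10

path = os.path.join(tempfile.mkdtemp(), "X_train.npy")

# Disk-backed array in .npy format: rows are paged in and out on demand,
# so resident memory stays small regardless of the total array size
X_train = np.lib.format.open_memmap(
    path, mode="w+", dtype=np.float32, shape=(n_samples, length, vector_size)
)

def row_embedding(i, length, vector_size):
    # Hypothetical stand-in for the real word-embedding lookup
    return np.full((length, vector_size), float(i), dtype=np.float32)

for i in range(n_samples):
    X_train[i] = row_embedding(i, length, vector_size)  # fill one row at a time
X_train.flush()  # ensure the data is written through to the disk file
```

A memmap can then be passed to `convmodel.fit(X_train, Y_train, batch_size=128, ...)` like an ordinary array: Keras slices mini-batches out of it, and NumPy loads only the pages those slices touch.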

0 Answers:

There are no answers yet.