For each user I have a profile picture and time-series data (the events that user generated). For binary classification I wrote two models, an LSTM and a CNN, and each works on its own. What I actually want is to combine the two models.
Here is my LSTM model:
from keras.models import Model
from keras.layers import Input, Dense, Dropout, LSTM, concatenate
from keras.optimizers import RMSprop

input1_length = X_train.shape[1]
input1_dim = X_train.shape[2]
input2_length = X_inter_train.shape[1]
input2_dim = X_inter_train.shape[2]
output_dim = 1

input1 = Input(shape=(input1_length, input1_dim))
input2 = Input(shape=(input2_length, input2_dim))
lstm1 = LSTM(20)(input1)
lstm2 = LSTM(10)(input2)
lstm1 = Dense(256, activation='relu')(lstm1)
lstm1 = Dropout(0.5)(lstm1)
lstm1 = Dense(12, activation='relu')(lstm1)
lstm2 = Dense(256, activation='relu')(lstm2)
# lstm2 = Dropout(0.5)(lstm2)
lstm2 = Dense(12, activation='relu')(lstm2)
merge = concatenate([lstm1, lstm2])
# interpretation model
lstm = Dense(128, activation='relu')(merge)
output = Dense(output_dim, activation='sigmoid')(lstm)

model = Model([input1, input2], output)
optimizer = RMSprop(lr=1e-3, decay=0.0)
model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
model.summary()
The CNN model:
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization, Flatten

def gen_img_model(input_dim=(75, 75, 3)):
    input = Input(shape=input_dim)
    conv = Conv2D(32, kernel_size=(3, 3), activation='relu')(input)
    conv = MaxPooling2D((3, 3))(conv)
    conv = Dropout(0.2)(conv)
    conv = BatchNormalization()(conv)
    conv = Flatten()(conv)  # flatten the 4D conv output before the Dense layers
    dense = Dense(128, activation='relu', name='img_features')(conv)
    dense = Dropout(0.2)(dense)
    output = Dense(1, activation='sigmoid')(dense)
    optimizer = RMSprop(lr=1e-3, decay=0.0)
    model = Model(input, output)
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model
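The layer name 'img_features' suggests the CNN is also meant to serve as a feature extractor. One way to do that is to build a sub-model that ends at the named layer. A minimal, self-contained sketch (a simplified stand-in for the CNN above, with a Flatten added so the Dense layer sees a vector; sizes are illustrative):

```python
import numpy as np
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense

# Simplified stand-in for gen_img_model() above (hypothetical sizes).
inp = Input(shape=(75, 75, 3))
x = Conv2D(32, (3, 3), activation='relu')(inp)
x = MaxPooling2D((3, 3))(x)
x = Flatten()(x)
x = Dense(128, activation='relu', name='img_features')(x)
out = Dense(1, activation='sigmoid')(x)
img_model = Model(inp, out)

# Sub-model that stops at the named layer: predict() returns the
# 128-dimensional feature vector instead of the sigmoid output.
feature_extractor = Model(
    inputs=img_model.input,
    outputs=img_model.get_layer('img_features').output)

features = feature_extractor.predict(np.zeros((1, 75, 75, 3)), verbose=0)
print(features.shape)  # (1, 128)
```

These extracted features could then be concatenated with the LSTM branch's output instead of training the CNN end-to-end.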
Here is how the CNN is trained:
from keras.callbacks import ModelCheckpoint
from keras.preprocessing.image import ImageDataGenerator

checkpoint_name = './keras_img_checkpoint/img_model'
callbacks = [ModelCheckpoint(checkpoint_name, save_best_only=True)]
img_model = gen_img_model((75, 75, 3))

# batch size for img model
batch_size = 200

train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# validation images should only be rescaled, not augmented
val_datagen = ImageDataGenerator(rescale=1./255)

# train gen for img model
train_generator = train_datagen.flow_from_directory(
    './dataset/train/',
    target_size=(75, 75),
    batch_size=batch_size,
    class_mode='binary')

val_generator = val_datagen.flow_from_directory(
    './dataset/val/',
    target_size=(75, 75),
    batch_size=batch_size,
    class_mode='binary')

STEP_SIZE_TRAIN = train_generator.n // train_generator.batch_size
STEP_SIZE_VAL = val_generator.n // val_generator.batch_size

img_model.fit_generator(
    train_generator,
    steps_per_epoch=STEP_SIZE_TRAIN,
    validation_data=val_generator,
    validation_steps=STEP_SIZE_VAL,
    epochs=1,
    verbose=1,
    callbacks=callbacks)
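Since ModelCheckpoint with save_best_only=True writes out the full best model, it can later be restored with load_model. A minimal, self-contained save/load round-trip (using a stand-in model and a temporary path, not the actual checkpoint above):

```python
import os
import tempfile
from keras.models import Sequential, load_model
from keras.layers import Input, Dense

# Stand-in model; ModelCheckpoint saves the checkpointed CNN the same way.
model = Sequential([Input(shape=(4,)), Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy')

path = os.path.join(tempfile.mkdtemp(), 'img_model.keras')
model.save(path)            # what ModelCheckpoint does on each improvement
restored = load_model(path)  # restore the best saved model for reuse
print(restored.output_shape)  # (None, 1)
```

Restoring the checkpointed CNN this way is one route to reusing its learned weights inside a combined model.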
What is the best way to connect the LSTM and CNN models?
Answer 0 (score: 0)
You can put CNN and LSTM layers in a single model with Keras, but you may run into shape problems. Example:
def CNN_LSTM():
    model = Sequential()
    model.add(Convolution2D(input_shape=..., filters=..., kernel_size=...,
                            activation=...))
    model.add(LSTM(units=...))
    return model
You just have to fill in the parameters. Hope this helps.
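The shape problem mentioned above is that Conv2D outputs a 4D tensor (batch, height, width, channels) while LSTM expects a 3D (batch, timesteps, features) tensor. A shape-correct sketch (sizes are illustrative, not taken from the question) that bridges the two with a Reshape:

```python
from keras.models import Sequential
from keras.layers import Input, Conv2D, MaxPooling2D, Reshape, LSTM, Dense

model = Sequential()
model.add(Input(shape=(75, 75, 3)))
model.add(Conv2D(32, (3, 3), activation='relu'))  # -> (73, 73, 32)
model.add(MaxPooling2D((3, 3)))                   # -> (24, 24, 32)
# Reshape the 4D feature maps into a sequence: each of the 24 rows
# becomes one timestep with 24 * 32 = 768 features.
model.add(Reshape((24, 24 * 32)))
model.add(LSTM(20))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
```

Note this stacks the CNN and LSTM on the *same* input; for one image input and one separate time-series input (as in the question), two branches merged with concatenate are the more natural fit.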
Answer 1 (score: 0)
This is how you can merge two deep learning models:
from keras.models import Sequential, Model
from keras.layers import Dense, Activation, Dropout, Flatten, Concatenate

model1 = Sequential()
# input
model1.add(Dense(32, input_shape=(NUM_FEAT1, 1)))
model1.add(Activation("elu"))
model1.add(Dropout(0.5))
model1.add(Dense(16))
model1.add(Activation("elu"))
model1.add(Dropout(0.25))
model1.add(Flatten())

model2 = Sequential()
# input
model2.add(Dense(32, input_shape=(NUM_FEAT1, 1)))
model2.add(Activation("elu"))
model2.add(Dropout(0.5))
model2.add(Dense(16))
model2.add(Activation("elu"))
model2.add(Dropout(0.25))
model2.add(Flatten())

merged = Concatenate()([model1.output, model2.output])
z = Dense(128, activation="relu")(merged)
z = Dropout(0.25)(z)
z = Dense(1024, activation="relu")(z)
z = Dense(1, activation="sigmoid")(z)

model = Model(inputs=[model1.input, model2.input], outputs=z)
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit([x_train[train_index][:, :66], x_train[train_index][:, 66:132]],
          y_train[train_index], batch_size=100, epochs=100, verbose=2)
This way you can feed the model two different kinds of data, for example images into the first branch and text data into the second.
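Applied to the question's setup, the same pattern merges an image branch and a time-series branch with the functional API. A sketch (the sequence input shape (30, 4) is a placeholder, and the layer sizes only loosely follow the question's models):

```python
from keras.models import Model
from keras.layers import (Input, Conv2D, MaxPooling2D, Flatten,
                          Dense, LSTM, concatenate)

# Image branch (profile picture).
img_in = Input(shape=(75, 75, 3))
x = Conv2D(32, (3, 3), activation='relu')(img_in)
x = MaxPooling2D((3, 3))(x)
x = Flatten()(x)
x = Dense(128, activation='relu')(x)

# Time-series branch (user events); (timesteps, features) are placeholders.
seq_in = Input(shape=(30, 4))
y = LSTM(20)(seq_in)
y = Dense(128, activation='relu')(y)

# Merge both branches into one binary classifier head.
merged = concatenate([x, y])
z = Dense(128, activation='relu')(merged)
out = Dense(1, activation='sigmoid')(z)

model = Model(inputs=[img_in, seq_in], outputs=out)
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
# model.fit([images, sequences], labels, ...)
```

Training then takes a list of two input arrays, one per branch, exactly as in the fit call above.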
Answer 2 (score: -1)
I realize this doesn't fully answer your question, but instead of building just these two models you could also run dozens of ML models on your dataset and see which performs best. You can use AutoML tools or DataRobot for such tasks.
https://heartbeat.fritz.ai/automl-the-next-wave-of-machine-learning-5494baac615f