I have some long 1-D vector sequences (3000 digits long) that I want to classify. I have previously implemented a simple CNN that classifies them relatively successfully:
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense, LSTM, TimeDistributed

def create_shallow_model(shape, repeat_length, stride):
    model = Sequential()
    model.add(Conv1D(75, repeat_length, strides=stride, padding='same',
                     input_shape=shape, activation='relu'))
    model.add(MaxPooling1D(repeat_length))
    model.add(Flatten())
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
    return model
However, I would like to improve the performance by stacking an LSTM/RNN on the end of the network.
I am having difficulty with this because I cannot seem to find a way to get the network to accept the data.
def cnn_lstm(shape, repeat_length, stride):
    model = Sequential()
    model.add(TimeDistributed(Conv1D(75, repeat_length, strides=stride, padding='same',
                                     activation='relu'), input_shape=(None,) + shape))
    model.add(TimeDistributed(MaxPooling1D(repeat_length)))
    model.add(TimeDistributed(Flatten()))
    model.add(LSTM(6, return_sequences=True))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='sparse_categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
    return model
model=cnn_lstm(X.shape[1:],1000,1)
tprs,aucs=calculate_roc(model,3,100,train_X,train_y,test_X,test_y,tprs,aucs)
But I get the following error:
ValueError: Error when checking input: expected time_distributed_4_input to have 4 dimensions, but got array with shape (50598, 3000, 1)
My questions are:
Is this the correct way to analyse this data?
If so, how do I get the network to accept and classify the input sequences?
Answer (score: 4)
There is no need to add those TimeDistributed wrappers. Currently, before adding the LSTM layer, your model looks like this (I have assumed repeat_length=5 and stride=1):
Layer (type)                 Output Shape              Param #
=================================================================
conv1d_2 (Conv1D)            (None, 3000, 75)          450
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 600, 75)           0
_________________________________________________________________
flatten_2 (Flatten)          (None, 45000)             0
_________________________________________________________________
dense_4 (Dense)              (None, 1)                 45001
=================================================================
Total params: 45,451
Trainable params: 45,451
Non-trainable params: 0
_________________________________________________________________
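(For reference, this summary comes from building your own shallow model with those assumed values; the input shape (3000, 1) is inferred from the (50598, 3000, 1) array in the error message, so adjust it if your data differs:)

# Assumed values: repeat_length=5, stride=1, per-sample input shape (3000, 1).
model = create_shallow_model((3000, 1), 5, 1)
model.summary()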
So if you want to add an LSTM layer, you can put it right after the MaxPooling1D layer, e.g. model.add(LSTM(16, activation='relu')), and simply remove the Flatten layer. Now the model looks like this:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv1d_4 (Conv1D)            (None, 3000, 75)          450
_________________________________________________________________
max_pooling1d_3 (MaxPooling1 (None, 600, 75)           0
_________________________________________________________________
lstm_1 (LSTM)                (None, 16)                5888
_________________________________________________________________
dense_5 (Dense)              (None, 1)                 17
=================================================================
Total params: 6,355
Trainable params: 6,355
Non-trainable params: 0
_________________________________________________________________
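Put together, a minimal sketch of this suggested model (the function name cnn_lstm_v2 is only for illustration; everything else reuses the layers from your own code, with the Flatten layer swapped for an LSTM):

from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Dense, LSTM

def cnn_lstm_v2(shape, repeat_length, stride):
    model = Sequential()
    # Same CNN front-end as the shallow model.
    model.add(Conv1D(75, repeat_length, strides=stride, padding='same',
                     input_shape=shape, activation='relu'))
    model.add(MaxPooling1D(repeat_length))
    # The LSTM consumes the pooled sequence directly (e.g. (600, 75) when
    # repeat_length=5), so no Flatten layer is needed before it.
    model.add(LSTM(16, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
    return model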
If you want, you can pass return_sequences=True to the LSTM layer and keep the Flatten layer. But only do something like that after you have tried the first approach and gotten poor results, since adding return_sequences=True may not be necessary at all and only increases the model size while possibly decreasing its performance.
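If you do try that variant, it would look roughly like this (again only an illustrative sketch with a made-up function name, not code from the original post):

from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense, LSTM

def cnn_lstm_seq(shape, repeat_length, stride):
    # Variant with return_sequences=True: the LSTM emits one vector per
    # timestep, so a Flatten layer is kept before the final Dense layer.
    model = Sequential()
    model.add(Conv1D(75, repeat_length, strides=stride, padding='same',
                     input_shape=shape, activation='relu'))
    model.add(MaxPooling1D(repeat_length))
    model.add(LSTM(16, activation='relu', return_sequences=True))
    model.add(Flatten())
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
    return model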
As a side note: why did you change the loss function to sparse_categorical_crossentropy in the second model? There is no need to do that; binary_crossentropy works fine here, since you have a single sigmoid output with binary labels.