I want to build the CNN-LSTM network described in "Automatic Sleep Stage Classification using Convolutional Neural Networks with Long Short-Term Memory" for sleep stage classification, implementing the model architecture given in its appendix (shown in the figure above).
I tried to reproduce it by matching the output dims listed in the appendix, but I run into an error at the very last step of the code.
The conv1d part seems to work fine, even on my own data (after fitting); the problems start with the LSTM part. It would be great if you could look over both the CNN and the LSTM parts of the architecture, because I'm not sure where the problem actually begins. Thanks in advance.
import numpy as np
import tensorflow as tf
from keras import regularizers
from keras.layers import Input, Conv1D, MaxPooling1D, Dense, Flatten, Dropout, BatchNormalization, LSTM
from keras.models import Model

# CNN PART STARTS HERE
inputs = Input(shape=(2800, 1))
block1 = Conv1D(filters=128, kernel_size=50, strides=5, kernel_regularizer=regularizers.l2(0.05))(inputs)
#block1 = Dropout(0.2)(block1)
#block1 = BatchNormalization()(block1)
block1.shape
block2 = Conv1D(filters=256, kernel_size=5, strides=1, kernel_regularizer=regularizers.l2(0.01))(block1)
block2 = Dropout(0.2)(block2)
block2 = BatchNormalization()(block2)
block2.shape
block2_output = MaxPooling1D(pool_size=2, strides=2)(block2)
block2_output.shape
block3 = Conv1D(kernel_size=5, filters=300, strides=2, kernel_regularizer=regularizers.l2(0.01))(block2_output)
block3 = Dropout(0.2)(block3)
block3 = BatchNormalization()(block3)
block3.shape
block3_output = MaxPooling1D(pool_size=3, strides=2)(block3)
block3_output.shape
# fully connected part
block4 = Flatten()(block3_output)
block4 = Dense(1500)(block4)
block4 = Dropout(0.5)(block4)
block4 = BatchNormalization()(block4)
block4.shape
block5 = Dense(1500)(block4)
block5 = Dropout(0.5)(block5)
block5 = BatchNormalization()(block5)
block5.shape
block6 = Dense(5, activation='softmax')(block5)
block6.shape
model = Model(inputs=inputs, outputs=block6)
model.summary()
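# For reference, the output shapes I expect from the CNN part, if I computed them
# correctly ('valid' padding, so output length = floor((L - kernel)/stride) + 1):
#   block1 Conv1D       -> (None, 551, 128)   # (2800 - 50)/5 + 1
#   block2 Conv1D       -> (None, 547, 256)
#   block2 MaxPooling1D -> (None, 273, 256)
#   block3 Conv1D       -> (None, 135, 300)
#   block3 MaxPooling1D -> (None,  67, 300)
#   Flatten             -> (None, 20100), then Dense(1500) -> Dense(1500) -> Dense(5)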
# LSTM STARTS HERE
lstminputs=inputs = Input(shape=(1500,1))
block6_new = tf.reshape(block6,(100,1,1))
block7=LSTM(1,recurrent_dropout=0.3)(block6_new)
block7=Dropout(0.3)(block7)
block7.shape
block7_new = tf.reshape(block7,(100,1,1))
block8=LSTM(1,recurrent_dropout=0.3)(block7_new)
block8=Dropout(0.3)(block8)
block8.shape
block8_new = tf.reshape(block8,(1,100))
block9=Dense(5, activation='softmax')(block8_new)
block9.shape
block9 = tf.reshape(block9,(5,1))
block9.shape
lstmmodel=Model(inputs=lstminputs, outputs=block9)
The line above throws the following error:

ValueError: Output tensors to a Model must be the output of a Keras `Layer` (thus holding past layer metadata). Found: Tensor("Reshape_26:0", shape=(5, 1), dtype=float32)
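For what it's worth, a stripped-down snippet like the one below raises the same ValueError for me (assuming Keras 2.x with the TensorFlow backend; the layer sizes here are made up just for the repro), so I suspect the tf.reshape calls are part of the problem, but I don't know how to restructure the blocks otherwise:

import tensorflow as tf
from keras.layers import Input, Dense
from keras.models import Model

x = Input(shape=(10,))
y = Dense(5)(x)
y = tf.reshape(y, (5, 1))       # output of a raw tf op, not of a Keras layer
m = Model(inputs=x, outputs=y)  # raises the same "Output tensors to a Model..." ValueError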