"Output tensors to a Model must be the output of a Keras `Layer` (thus holding past layer metadata)" when using the functional API for a CNN-LSTM

Date: 2020-01-31 15:43:41

Tags: python keras

I am trying to do a simple CNN-LSTM classification using TimeDistributed, but I get the following error: Output tensors to a Model must be the output of a Keras `Layer` (thus holding past layer metadata). Found:

My samples are grayscale images with 366 channels and a spatial size of 5x5, and each sample has its own unique label.

model_input = Input(shape=(366,5,5))

model = TimeDistributed(Conv2D(64, (3, 3), activation='relu', padding='same',data_format='channels_first')(model_input))
model = TimeDistributed(MaxPooling2D((2, 2),padding='same',data_format='channels_first'))

model = TimeDistributed(Conv2D(128, (3,3), activation='relu',padding='same',data_format='channels_first'))
model = TimeDistributed(MaxPooling2D((2, 2), strides=(2, 2),padding='same',data_format='channels_first'))


model = Flatten()

model = LSTM(256, return_sequences=False, dropout=0.5)
model =  Dense(128, activation='relu')


model = Dense(6, activation='softmax')

cnnlstm = Model(model_input, model)
cnnlstm.compile(optimizer='adamax',
                loss='sparse_categorical_crossentropy',
                metrics=['accuracy'])
cnnlstm.summary()

1 answer:

Answer 0: (score: 0)

You have to pass the tensors between the layers, because that is how the functional API works for every layer: each layer is called on the previous tensor using the Layer(params...)(input) notation:

from tensorflow.keras.layers import (Input, TimeDistributed, Conv2D,
                                     MaxPooling2D, Flatten, LSTM, Dense)
from tensorflow.keras.models import Model

# Each time step must itself be a 3D tensor for Conv2D, so the 366
# 5x5 slices need an explicit channel axis: (time, channels, rows, cols)
# to match data_format='channels_first'.
model_input = Input(shape=(366, 1, 5, 5))

model = TimeDistributed(Conv2D(64, (3, 3), activation='relu', padding='same', data_format='channels_first'))(model_input)
model = TimeDistributed(MaxPooling2D((2, 2), padding='same', data_format='channels_first'))(model)

model = TimeDistributed(Conv2D(128, (3, 3), activation='relu', padding='same', data_format='channels_first'))(model)
model = TimeDistributed(MaxPooling2D((2, 2), strides=(2, 2), padding='same', data_format='channels_first'))(model)

model = TimeDistributed(Flatten())(model)

model = LSTM(256, return_sequences=False, dropout=0.5)(model)
model = Dense(128, activation='relu')(model)

model = Dense(6, activation='softmax')(model)

cnnlstm = Model(model_input, model)

Note that I also corrected the first TimeDistributed layer, where the input tensor was being passed in the wrong place (inside the TimeDistributed wrapper rather than to it).
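As a sanity check, here is a minimal end-to-end sketch of the corrected pattern. It is an assumption-laden illustration, not the asker's exact setup: it treats the 366 slices as time steps with a single channel each, uses small random placeholder data instead of the real samples, shrinks the layer sizes so it runs quickly, and switches to the default channels_last layout because CPU-only TensorFlow builds may not support channels_first convolutions:

```python
import numpy as np
from tensorflow.keras.layers import (Input, TimeDistributed, Conv2D,
                                     MaxPooling2D, Flatten, LSTM, Dense)
from tensorflow.keras.models import Model

# (time, rows, cols, channels): 366 time steps of one-channel 5x5 images.
inp = Input(shape=(366, 5, 5, 1))
x = TimeDistributed(Conv2D(8, (3, 3), activation='relu', padding='same'))(inp)
x = TimeDistributed(MaxPooling2D((2, 2), padding='same'))(x)
x = TimeDistributed(Flatten())(x)
x = LSTM(16, return_sequences=False)(x)
out = Dense(6, activation='softmax')(x)

cnnlstm = Model(inp, out)
cnnlstm.compile(optimizer='adamax', loss='sparse_categorical_crossentropy')

# Two random placeholder samples with integer labels in [0, 6)
X = np.random.rand(2, 366, 5, 5, 1).astype('float32')
y = np.array([0, 3])
cnnlstm.fit(X, y, epochs=1, verbose=0)
print(cnnlstm.predict(X, verbose=0).shape)  # (2, 6)
```

Because every layer call returns a tensor that is fed to the next layer, Model(inp, out) can trace the graph back from the softmax output to the Input, which is exactly what the error message was complaining about.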