Keras LSTM model input and output dimensions do not match

Time: 2017-07-06 10:37:27

Tags: python-3.x machine-learning deep-learning keras mismatch

    from keras.models import Sequential
    from keras.layers import Embedding, LSTM, Dense
    from keras.callbacks import ModelCheckpoint

    model = Sequential()

    model.add(Embedding(630, 210))
    model.add(LSTM(1024, dropout = 0.2, return_sequences = True))
    model.add(LSTM(1024, dropout = 0.2, return_sequences = True))
    model.add(Dense(210, activation = 'softmax'))

    model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])

    filepath = 'ner_2-{epoch:02d}-{loss:.5f}.hdf5'
    checkpoint = ModelCheckpoint(filepath, monitor = 'loss', verbose = 1, save_best_only = True, mode = 'min')
    callback_list = [checkpoint]

    model.fit(X, y, epochs = 20, batch_size = 1024, callbacks = callback_list)

X: the input vector has shape (204564, 630, 1)

y: the target vector has shape (204564, 210, 1)

That is, for every 630 inputs, 210 outputs must be predicted, but the code throws the following error when `fit` is called:

ValueError                                Traceback (most recent call last)
<ipython-input-57-05a6affb6217> in <module>()
     50 callback_list = [checkpoint]
     51 
---> 52 model.fit(X, y , epochs = 20, batch_size = 1024, callbacks = callback_list)
     53 print('successful')



ValueError: Error when checking model input: expected embedding_8_input to have 2 dimensions, but got array with shape (204564, 630, 1)

Could someone please explain why this error occurs and how to resolve it?

1 answer:

Answer 0 (score: 1)

The message says it all: your first layer expects input with 2 dimensions, (BatchSize, SomeOtherDimension), but your input has 3 dimensions, (BatchSize = 204564, SomeOtherDimension = 630, 1).
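To see why the layer wants 2-D input, note that an embedding is essentially a table lookup indexed by integer token ids: the input must be a (batch, timesteps) array of ids, and the lookup itself appends the feature dimension. A minimal NumPy sketch (the table and the tiny id batch here are hypothetical, not from the question):

```python
import numpy as np

vocab_size, embed_dim = 630, 210
# Hypothetical embedding matrix, standing in for the Embedding layer's weights
table = np.random.rand(vocab_size, embed_dim)

ids = np.array([[5, 7, 9],
                [2, 4, 6]])   # (batch=2, timesteps=3) integer ids: 2-D input
vectors = table[ids]          # lookup appends the feature axis
print(vectors.shape)          # (2, 3, 210)
```

A 3-D input of shape (batch, timesteps, 1) would produce a 4-D result, which is why the layer rejects it.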

So: either remove the 1 from your input, or reshape it inside the model:

Solution 1 - remove the extra dimension from the input:

X = X.reshape((204564, 630))
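As a sketch (using a small hypothetical batch in place of the real (204564, 630, 1) array), the reshape simply drops the trailing singleton axis, leaving each sample as a flat sequence of token ids:

```python
import numpy as np

# Hypothetical mini-batch standing in for X: (batch=2, timesteps=3, 1)
X = np.arange(6).reshape((2, 3, 1))

# Drop the trailing singleton axis so the Embedding layer gets 2-D input
X = X.reshape((X.shape[0], X.shape[1]))
print(X.shape)  # (2, 3)
```

No values are changed or reordered; only the array's shape metadata differs, so this is a cheap operation.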

Solution 2 - add a Reshape layer:

from keras.layers import Reshape

model = Sequential()
model.add(Reshape((630,), input_shape=(630, 1)))
model.add(Embedding.....)