Keras LSTM: wrong input shape in Dense layer

Date: 2018-09-13 17:00:29

Tags: python keras lstm text-classification

I am trying to build a Keras text classifier using an LSTM.

This is the model structure:

model_word2vec = Sequential()
model_word2vec.add(Embedding(input_dim=vocabulary_dimension,
                    output_dim=embedding_dim,
                    weights=[word2vec_weights],
                    input_length=longest_sentence,
                    mask_zero=True,
                    trainable=False))
model_word2vec.add(LSTM(units=embedding_dim, dropout=0.25, recurrent_dropout=0.25, return_sequences=True))
model_word2vec.add(Dense(3, activation='softmax'))
model_word2vec.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])


results = model_word2vec.fit(X_tr_word2vec, y_tr_word2vec, validation_split=0.16, epochs=3, batch_size=128, verbose=0)

y_tr_word2vec is a one-hot encoded variable with 3 classes.

When I run the code above, I get this error:

ValueError: Error when checking model target: expected dense_2 to have 3 dimensions, but got array with shape (15663, 3)

I think the problem could be related to the shape of y_tr_word2vec or to the batch size, but I'm not sure.

Update

I have changed return_sequences=False, converted y_tr_word2vec from one-hot to categorical (integer labels), switched to a Dense layer with 1 neuron, and I am now using sparse_categorical_crossentropy instead of categorical_crossentropy.

Now I get this error: ValueError: invalid literal for int() with base 10: 'countess'

So I now suspect that during fit() something is wrong with the input vector X_tr_word2vec, which contains the sentences.
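The 'countess' error suggests X_tr_word2vec still contains raw word strings, while an Embedding layer expects integer indices. A minimal sketch of the conversion (the sentences and word_index names here are hypothetical; index 0 is reserved for padding because the model uses mask_zero=True):

```python
# Hypothetical tokenized input; in the question this would be X_tr_word2vec.
sentences = [["the", "countess", "smiled"], ["the", "end"]]

# Build a vocabulary mapping word -> integer index, starting at 1
# so that 0 stays free as the padding/mask value.
word_index = {}
for sent in sentences:
    for w in sent:
        if w not in word_index:
            word_index[w] = len(word_index) + 1

longest_sentence = max(len(s) for s in sentences)

# Encode each sentence and right-pad with zeros to a fixed length.
encoded = [[word_index[w] for w in s] + [0] * (longest_sentence - len(s))
           for s in sentences]

print(encoded)  # [[1, 2, 3], [1, 4, 0]]
```

After this step the input is a rectangular array of integers, which is what Embedding(input_dim=..., mask_zero=True) expects.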

1 answer:

Answer 0 (score: 1)

The problem is this code:

model_word2vec.add(LSTM(units=embedding_dim, dropout=0.25, recurrent_dropout=0.25, return_sequences=True))
model_word2vec.add(Dense(3, activation='softmax'))

You have set return_sequences=True, which means the LSTM returns a 3D array (one output per timestep) to the Dense layer, but Dense does not expect 3D data here. So remove return_sequences=True:

model_word2vec.add(LSTM(units=embedding_dim, dropout=0.25, recurrent_dropout=0.25))
model_word2vec.add(Dense(3, activation='softmax'))
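To see why the shapes disagree, here is a plain-NumPy sketch (no Keras needed; the timestep count of 40 is an arbitrary example) contrasting the two cases:

```python
import numpy as np

batch, timesteps, n_classes = 15663, 40, 3  # 40 timesteps is made up for illustration

# With return_sequences=True the LSTM emits one vector per timestep,
# so Dense(3) produces a 3-D output: (batch, timesteps, 3).
seq_output = np.zeros((batch, timesteps, n_classes))

# With return_sequences=False (the default) the LSTM emits only the
# last hidden state, so Dense(3) produces a 2-D output: (batch, 3).
last_output = np.zeros((batch, n_classes))

# The one-hot targets from the question have shape (15663, 3): 2-D,
# matching the second case, not the first.
targets = np.zeros((batch, n_classes))

assert last_output.shape == targets.shape       # OK after the fix
assert seq_output.ndim == 3 and targets.ndim == 2  # the mismatch Keras reports
```

This is exactly what the error message says: dense_2 was expected to output 3 dimensions (because of the sequence output), but the target array is (15663, 3).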

Why did you set return_sequences=True in the first place?