TensorFlow model error "ValueError: Shapes (None, 5) and (None, 500) are incompatible"

Time: 2020-06-13 17:02:07

Tags: tensorflow

I am trying to build a simple multi-input, single-output model with LSTM layers, and I have generated some data to fit it on. Basically, I have a reference corpus where each document has length 100, a question corpus where each document has length 25, and some answers of length 5.

import numpy as np
from tensorflow.keras import Input, backend, layers, models

TEXT_VOCAB_SIZE, QUESTION_VOCAB_SIZE = 10000, 25
ANSWER_VOCAB_SIZE = 500

max_samples, max_length, max_qn_length = 1000, 100, 25
max_ans_length = 5

# Synthetic data: (1000, 100) texts, (1000, 25) questions, (1000, 5) answers
text_corpus = np.random.randint(1, TEXT_VOCAB_SIZE,
                                size=(max_samples, max_length))
questions_corpus = np.random.randint(1, QUESTION_VOCAB_SIZE,
                                     size=(max_samples, max_qn_length))
answers_corpus = np.random.randint(1, ANSWER_VOCAB_SIZE,
                                   size=(max_samples, max_ans_length))

backend.clear_session()
m31_corpus_input = Input(shape=(max_length,), dtype='int32')
m31_qn_input = Input(shape=(max_qn_length,), dtype='int32')

m31_corpus_emb = layers.Embedding(64, TEXT_VOCAB_SIZE)(m31_corpus_input)
m31_qn_emb = layers.Embedding(64, QUESTION_VOCAB_SIZE)(m31_qn_input)

m31_corpus_lstm = layers.LSTM(32)(m31_corpus_emb)
m31_qn_lstm = layers.LSTM(32)(m31_qn_emb)

m31_concat = layers.concatenate([m31_corpus_lstm, m31_qn_lstm], axis=-1)
# m31_concat = layers.Concatenate()([m31_corpus_lstm, m31_qn_lstm])
m31_ans = layers.Dense(ANSWER_VOCAB_SIZE, activation='softmax')(m31_concat)
m31 = models.Model(inputs=[m31_corpus_input, m31_qn_input], outputs=m31_ans)
print(m31.summary())

m31.compile(optimizer='rmsprop', 
            loss='categorical_crossentropy',
            metrics=['acc'])

m31.fit([text_corpus, questions_corpus],
        answers_corpus, epochs=10, batch_size=64,
        validation_split=0.2)

I get the following error when running the code:

ValueError: Shapes (None, 5) and (None, 500) are incompatible

I have tried adjusting various values in this model, but I still cannot work out what is wrong.
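
A quick way to see where the two shapes in the error come from is to compare the label array with the model's output shape (a small diagnostic I ran just before the fit call):

print(answers_corpus.shape)   # (1000, 5)   -> reported as (None, 5)
print(m31.output_shape)       # (None, 500) -> the Dense(ANSWER_VOCAB_SIZE) output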

1 answer:

Answer 0 (score: 0):

I have solved it. There were two problems: the Embedding arguments were in the wrong order (the signature is Embedding(input_dim, output_dim), i.e. vocabulary size first), and with categorical_crossentropy the labels must be one-hot vectors of length ANSWER_VOCAB_SIZE, not integer sequences of length 5. Generating one answer class index per sample and passing it through to_categorical fixes the shape mismatch:

import numpy as np
from tensorflow.keras import Input, backend, layers, models
from tensorflow.keras.utils import to_categorical

TEXT_VOCAB_SIZE, QUESTION_VOCAB_SIZE, ANSWER_VOCAB_SIZE = 10000, 25, 500
max_length, max_qn_length, max_ans_length = 100, 25, 5
max_samples = 1000


text_corpus = np.random.randint(1, TEXT_VOCAB_SIZE,
                                size=(max_samples, max_length))
questions_corpus = np.random.randint(1, QUESTION_VOCAB_SIZE,
                                     size=(max_samples, max_qn_length))
# One answer class index per sample, then one-hot encode it so the labels
# have shape (max_samples, ANSWER_VOCAB_SIZE) to match the softmax output.
answers_corpus = np.random.randint(0, ANSWER_VOCAB_SIZE,
                                   size=(max_samples,))
answers_corpus = to_categorical(answers_corpus)


backend.clear_session()
m31_corpus_input = Input(shape=(max_length,), dtype='int32')
m31_qn_input = Input(shape=(max_qn_length,), dtype='int32')

# Embedding takes (input_dim, output_dim): vocabulary size first, then embedding size
m31_corpus_emb = layers.Embedding(TEXT_VOCAB_SIZE, 64)(m31_corpus_input)
m31_qn_emb = layers.Embedding(QUESTION_VOCAB_SIZE, 64)(m31_qn_input)

m31_corpus_lstm = layers.LSTM(32)(m31_corpus_emb)
m31_qn_lstm = layers.LSTM(32)(m31_qn_emb)

m31_concat = layers.Concatenate()([m31_corpus_lstm, m31_qn_lstm])
# 500-way softmax now matches the one-hot labels of shape (None, 500)
m31_ans = layers.Dense(ANSWER_VOCAB_SIZE, activation='softmax')(m31_concat)
m31 = models.Model(inputs=[m31_corpus_input, m31_qn_input], outputs=m31_ans)
print(m31.summary())


m31.compile(optimizer='rmsprop', 
            loss='categorical_crossentropy',
            metrics=['acc'])

m31.fit([text_corpus, questions_corpus], answers_corpus, 
        epochs=10, batch_size=128)
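
As a side note, if you prefer to keep the labels as plain integer class indices, sparse_categorical_crossentropy should also resolve the shape mismatch without the to_categorical step. A minimal sketch of that variant (answers_idx is just a renamed label array for the sketch; treat this as an untested assumption rather than part of the fix above):

# Integer labels: one class index per sample, shape (max_samples,)
answers_idx = np.random.randint(0, ANSWER_VOCAB_SIZE, size=(max_samples,))

m31.compile(optimizer='rmsprop',
            loss='sparse_categorical_crossentropy',
            metrics=['acc'])
m31.fit([text_corpus, questions_corpus], answers_idx,
        epochs=10, batch_size=128)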