Keras model with TensorFlow backend throws InvalidArgumentError on predict

Time: 2017-08-17 06:04:46

Tags: python tensorflow deep-learning keras data-science

I trained a Keras model using glove.42B.300d.txt for the word vectors. Training ran without problems, but during prediction I get the following error:

InvalidArgumentError: indices[18,28] = 137077 is not in [0, 137077) [[Node: embedding_1_1/Gather = Gather[Tindices=DT_INT32, Tparams=DT_FLOAT, validate_indices=true, _device="/job:localhost/replica:0/task:0/cpu:0"](embedding_1/embeddings/read, _arg_input_2_0_2)]].

Caused by op u'embedding_1_1/Gather
  

Is this caused by a TensorFlow version conflict? I have 1.2.1. Any guidance on this would be much appreciated. Thanks.
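The error message itself narrows things down: [0, 137077) means the embedding table has 137077 rows (valid indices 0 to 137076), yet a test sequence contains the index 137077. That points to an out-of-range lookup, typically because nb_words is one smaller than the largest token id (Keras Tokenizer indices start at 1), rather than to the TensorFlow version. A minimal diagnostic sketch, assuming train_data_1, test_data_1 and test_data_2 are the padded index matrices and nb_words is still in scope:

import numpy as np

# The embedding table has nb_words rows, so the largest index it can look up
# is nb_words - 1. The error reports index 137077 with valid range [0, 137077),
# i.e. the table is one row short for that token.
max_train_index = int(np.max(train_data_1))
max_test_index = int(max(np.max(test_data_1), np.max(test_data_2)))

print('nb_words: %d' % nb_words)
print('largest index in train data: %d' % max_train_index)
print('largest index in test data: %d' % max_test_index)

if max_test_index >= nb_words:
    # Usual fix: give the layer one extra row, e.g. Embedding(nb_words + 1, ...),
    # since row 0 is reserved for padding and Tokenizer indices start at 1.
    print('test data contains indices outside the embedding table')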

Model

from keras.layers import Embedding

# Frozen embedding layer initialised from the pre-computed GloVe matrix
embedding_layer = Embedding(nb_words,
                            EMBEDDING_DIM,
                            weights=[embedding_matrix],
                            input_length=MAX_SEQUENCE_LENGTH,
                            trainable=False)
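For context, the question does not show how nb_words and embedding_matrix are built; a common pattern for glove.42B.300d.txt (300-dimensional vectors) looks like the sketch below, where word_index is the Keras Tokenizer vocabulary and every name not appearing in the question is assumed. Note the + 1 on the row count, which is exactly what the error above suggests is missing:

import numpy as np

# Assumed helper code, not from the question: load GloVe vectors and build
# the weight matrix indexed by the Tokenizer's word_index (indices start at 1).
embeddings_index = {}
with open('glove.42B.300d.txt') as f:
    for line in f:
        values = line.split()
        embeddings_index[values[0]] = np.asarray(values[1:], dtype='float32')

EMBEDDING_DIM = 300
nb_words = len(word_index) + 1            # + 1: row 0 is the padding row
embedding_matrix = np.zeros((nb_words, EMBEDDING_DIM))
for word, i in word_index.items():
    vector = embeddings_index.get(word)
    if vector is not None:
        embedding_matrix[i] = vector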
  

Model architecture

from keras.layers import (Input, Conv1D, MaxPooling1D, Flatten, Dense,
                          Dropout, BatchNormalization)
from keras.models import Model
# `merge` below is the legacy Keras 1.x functional helper; the Keras 2
# equivalent is shown after this block.

# Branch 1
sequence_1_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences_1 = embedding_layer(sequence_1_input)
x1 = Conv1D(128, 3, activation='relu')(embedded_sequences_1)
x1 = MaxPooling1D(10)(x1)
x1 = Flatten()(x1)
x1 = Dense(64, activation='relu')(x1)
x1 = Dropout(0.2)(x1)

# Branch 2 (shares the same embedding layer)
sequence_2_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences_2 = embedding_layer(sequence_2_input)
y1 = Conv1D(128, 3, activation='relu')(embedded_sequences_2)
y1 = MaxPooling1D(10)(y1)
y1 = Flatten()(y1)
y1 = Dense(64, activation='relu')(y1)
y1 = Dropout(0.2)(y1)

# Merge both branches and classify
merged = merge([x1,y1], mode='concat')
merged = BatchNormalization()(merged)
merged = Dense(64, activation='relu')(merged)
merged = Dropout(0.2)(merged)
merged = BatchNormalization()(merged)
preds = Dense(1, activation='sigmoid')(merged)
model = Model(input=[sequence_1_input,sequence_2_input], output=preds)
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['acc'])
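Unrelated to the error, merge(..., mode='concat') and Model(input=..., output=...) are Keras 1.x spellings that still run under early Keras 2 releases with deprecation warnings; the Keras 2 form, reusing the names from the code above for reference only, would be:

from keras.layers import concatenate

merged = concatenate([x1, y1])
model = Model(inputs=[sequence_1_input, sequence_2_input], outputs=preds)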

Data shapes

  • test_data_1: 2345796 x 30
  • test_data_2: 2345796 x 30
  • train_data_1: 404290 x 30
  • train_data_2: 404290 x 30
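Each matrix has 30 columns, which presumably matches MAX_SEQUENCE_LENGTH. Prediction on the two test matrices would be invoked roughly as below (batch_size and verbose are illustrative, not from the question); the InvalidArgumentError above would fire inside this call as soon as a batch contains the out-of-range index:

# Illustrative prediction call; batch_size / verbose values are assumptions.
test_preds = model.predict([test_data_1, test_data_2],
                           batch_size=1024, verbose=1)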

0 Answers:

No answers yet.