I have trained a model for text classification (in this case, each input is actually a single word converted to numbers using pre-trained word embeddings of shape (300,)). The problem is that the model trains and evaluates just fine (or so it seems to me), but when I use the predict function, no matter how many inputs I feed it, all predictions come back identical; in other words, the prediction for the first element is correct, but every other element returns the same value.
from keras.models import Sequential
from keras.layers import Embedding, Conv1D, MaxPooling1D, Flatten, Dense
from keras.callbacks import ModelCheckpoint

def define_model(vocab_size, max_length):
    model = Sequential()
    # Embedding layer initialised with the pre-trained 300-d word vectors
    model.add(Embedding(vocab_size, 300, input_length=max_length, weights=[embedding_matrix]))
    model.add(Conv1D(filters=32, kernel_size=8, activation='selu'))
    model.add(MaxPooling1D(pool_size=2))
    model.add(Flatten())
    model.add(Dense(10, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))  # binary classification head
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.summary()
    return model

model = define_model(vocab_size, max_length)
# Keep only the weights with the best validation accuracy
checkpoint = ModelCheckpoint("object.h5", monitor='val_acc', save_best_only=True, mode='max')
model.fit(X_train, y_train, epochs=20, validation_data=(X_val, y_val), batch_size=256,
          callbacks=[checkpoint])
model.save('saved models/Keras/object.h5')
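For context, X, vocab_size, max_length, and embedding_matrix come from a preprocessing step that looks roughly like this (a sketch of what I do; embeddings_index stands in for the pre-trained word-to-vector mapping loaded from file, so treat the exact names as illustrative):

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
import numpy as np

tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts)                     # texts: list of raw input strings
sequences = tokenizer.texts_to_sequences(texts)   # words -> integer indices
max_length = max(len(s) for s in sequences)
X = pad_sequences(sequences, maxlen=max_length, padding='post')
vocab_size = len(tokenizer.word_index) + 1

# Copy each word's pre-trained 300-d vector into the embedding matrix;
# embeddings_index (word -> vector) is assumed to be loaded beforehand
embedding_matrix = np.zeros((vocab_size, 300))
for word, i in tokenizer.word_index.items():
    vector = embeddings_index.get(word)
    if vector is not None:
        embedding_matrix[i] = vector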
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding_9 (Embedding)      (None, 300, 300)          3361200
_________________________________________________________________
conv1d_6 (Conv1D)            (None, 293, 32)           76832
_________________________________________________________________
max_pooling1d_6 (MaxPooling1 (None, 146, 32)           0
_________________________________________________________________
flatten_6 (Flatten)          (None, 4672)              0
_________________________________________________________________
dense_11 (Dense)             (None, 10)                46730
_________________________________________________________________
dense_12 (Dense)             (None, 1)                 11
=================================================================
Total params: 3,484,773
Trainable params: 3,484,773
Non-trainable params: 0
_________________________________________________________________
Train on 10102 samples, validate on 2526 samples
Epoch 1/20
10102/10102 [==============================] - 36s 4ms/step - loss: 0.6943 - acc: 0.5282 - val_loss: 0.6922 - val_acc: 0.5313
Epoch 2/20
10102/10102 [==============================] - 33s 3ms/step - loss: 0.6899 - acc: 0.5301 - val_loss: 0.6952 - val_acc: 0.5313
Epoch 3/20
10102/10102 [==============================] - 34s 3ms/step - loss: 0.6823 - acc: 0.5383 - val_loss: 0.7346 - val_acc: 0.3915
Epoch 4/20
10102/10102 [==============================] - 33s 3ms/step - loss: 0.6441 - acc: 0.6352 - val_loss: 0.7665 - val_acc: 0.3670
Epoch 5/20
10102/10102 [==============================] - 33s 3ms/step - loss: 0.5570 - acc: 0.6949 - val_loss: 1.3648 - val_acc: 0.2989
Epoch 6/20
10102/10102 [==============================] - 33s 3ms/step - loss: 0.4524 - acc: 0.7762 - val_loss: 1.1479 - val_acc: 0.3187
Epoch 7/20
10102/10102 [==============================] - 32s 3ms/step - loss: 0.3823 - acc: 0.7928 - val_loss: 1.5121 - val_acc: 0.3032
Epoch 8/20
10102/10102 [==============================] - 35s 3ms/step - loss: 0.3224 - acc: 0.8039 - val_loss: 1.6736 - val_acc: 0.2977
Epoch 9/20
10102/10102 [==============================] - 36s 4ms/step - loss: 0.2839 - acc: 0.8130 - val_loss: 2.1001 - val_acc: 0.3064
Epoch 10/20
10102/10102 [==============================] - 33s 3ms/step - loss: 0.2705 - acc: 0.8132 - val_loss: 2.3334 - val_acc: 0.2977
Epoch 11/20
10102/10102 [==============================] - 34s 3ms/step - loss: 0.2618 - acc: 0.8159 - val_loss: 2.6393 - val_acc: 0.2985
Epoch 12/20
10102/10102 [==============================] - 35s 3ms/step - loss: 0.2579 - acc: 0.8137 - val_loss: 2.7506 - val_acc: 0.2997
Epoch 13/20
10102/10102 [==============================] - 35s 3ms/step - loss: 0.2573 - acc: 0.8164 - val_loss: 2.8572 - val_acc: 0.2989
Epoch 14/20
10102/10102 [==============================] - 35s 3ms/step - loss: 0.2559 - acc: 0.8173 - val_loss: 2.9541 - val_acc: 0.3021
Epoch 15/20
10102/10102 [==============================] - 35s 3ms/step - loss: 0.2544 - acc: 0.8170 - val_loss: 3.0950 - val_acc: 0.3040
Epoch 16/20
10102/10102 [==============================] - 35s 3ms/step - loss: 0.2546 - acc: 0.8166 - val_loss: 3.0565 - val_acc: 0.3013
Epoch 17/20
10102/10102 [==============================] - 36s 4ms/step - loss: 0.2547 - acc: 0.8177 - val_loss: 3.1455 - val_acc: 0.3040
Epoch 18/20
10102/10102 [==============================] - 35s 3ms/step - loss: 0.2572 - acc: 0.8179 - val_loss: 2.9103 - val_acc: 0.3044
Epoch 19/20
10102/10102 [==============================] - 37s 4ms/step - loss: 0.2533 - acc: 0.8177 - val_loss: 3.3200 - val_acc: 0.3040
Epoch 20/20
10102/10102 [==============================] - 36s 4ms/step - loss: 0.2553 - acc: 0.8176 - val_loss: 3.1079 - val_acc: 0.3100
from keras.models import load_model

# Reload the checkpointed model and evaluate on the train and test splits
model = load_model('saved models/Keras/object.h5')
_, acc = model.evaluate(X_train, y_train)
print('Train Accuracy: %.2f' % (acc * 100))
_, acc = model.evaluate(X_test, y_test)
print('Test Accuracy: %.2f' % (acc * 100))
10102/10102 [==============================] - 13s 1ms/step
Train Accuracy: 81.83
3158/3158 [==============================] - 4s 1ms/step
Test Accuracy: 31.19
Prediction:
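The prediction step looks roughly like this (a sketch; X_test is the padded test input from above, and the printed values only illustrate the pattern I see, not exact output):

preds = model.predict(X_test)    # sigmoid outputs, shape (n_samples, 1)
print(preds[:5])
# the first row differs, every following row comes back with the same value, e.g.
# [[0.8667], [0.3427], [0.3427], [0.3427], [0.3427]]

# Predicting one sample at a time for comparison, to rule out a batching issue
for row in X_test[:5]:
    print(model.predict(row.reshape(1, -1)))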
The same problem occurs in my other model, which classifies sentences using the Keras tokenizer.