I am trying to implement a CNN for sentence classification, following the architecture proposed in the paper. I am using Keras (with TensorFlow). Here is my model summary:
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to
====================================================================================================
input_4 (InputLayer)             (None, 56)            0
____________________________________________________________________________________________________
embedding (Embedding)            (None, 56, 300)       6510000
____________________________________________________________________________________________________
dropout_7 (Dropout)              (None, 56, 300)       0
____________________________________________________________________________________________________
conv1d_10 (Conv1D)               (None, 54, 100)       90100
____________________________________________________________________________________________________
conv1d_11 (Conv1D)               (None, 53, 100)       120100
____________________________________________________________________________________________________
conv1d_12 (Conv1D)               (None, 52, 100)       150100
____________________________________________________________________________________________________
max_pooling1d_10 (MaxPooling1D)  (None, 27, 100)       0
____________________________________________________________________________________________________
max_pooling1d_11 (MaxPooling1D)  (None, 26, 100)       0
____________________________________________________________________________________________________
max_pooling1d_12 (MaxPooling1D)  (None, 26, 100)       0
____________________________________________________________________________________________________
flatten_10 (Flatten)             (None, 2700)          0
____________________________________________________________________________________________________
flatten_11 (Flatten)             (None, 2600)          0
____________________________________________________________________________________________________
flatten_12 (Flatten)             (None, 2600)          0
____________________________________________________________________________________________________
concatenate_4 (Concatenate)      (None, 7900)          0
____________________________________________________________________________________________________
dropout_8 (Dropout)              (None, 7900)          0
____________________________________________________________________________________________________
dense_7 (Dense)                  (None, 50)            395050
____________________________________________________________________________________________________
dense_8 (Dense)                  (None, 5)             255
====================================================================================================
Total params: 7,265,605.0
Trainable params: 7,265,605.0
Non-trainable params: 0.0
I cannot figure out what is causing the overfitting. Please suggest some changes to the architecture to avoid it. Let me know if you need any more information.

Source code:
import numpy as np
from keras.layers import (Input, Embedding, Dropout, Conv1D, MaxPooling1D,
                          Flatten, Concatenate, Dense)
from keras.models import Model
from keras import optimizers, regularizers

if model_type in ['CNN-non-static', 'CNN-static']:
    embedding_wts = train_word2vec(np.vstack((x_train, x_test, x_valid)),
                                   ind_to_wrd, num_features=embedding_dim)
    if model_type == 'CNN-static':
        x_train = embedding_wts[0][x_train]
        x_test = embedding_wts[0][x_test]
        x_valid = embedding_wts[0][x_valid]
elif model_type == 'CNN-rand':
    embedding_wts = None
else:
    raise ValueError("Unknown model type")

batch_size = 50
filter_sizes = [3, 4, 5]
num_filters = 75
dropout_prob = (0.5, 0.8)
hidden_dims = 50
l2_reg = 0.3

# Decide the input dimensionality based on the model type
input_shape = (max_sent_len, embedding_dim) if model_type == "CNN-static" else (max_sent_len,)
model_input = Input(shape=input_shape)

# The static model has no embedding layer
if model_type == "CNN-static":
    z = Dropout(dropout_prob[0])(model_input)
else:
    z = Embedding(vocab_size, embedding_dim, input_length=max_sent_len,
                  name="embedding")(model_input)
    z = Dropout(dropout_prob[0])(z)

# Convolution layers (one branch per filter size)
z1 = Conv1D(filters=num_filters, kernel_size=3,
            padding="valid", activation="relu", strides=1)(z)
z1 = MaxPooling1D(pool_size=2)(z1)
z1 = Flatten()(z1)

z2 = Conv1D(filters=num_filters, kernel_size=4,
            padding="valid", activation="relu", strides=1)(z)
z2 = MaxPooling1D(pool_size=2)(z2)
z2 = Flatten()(z2)

z3 = Conv1D(filters=num_filters, kernel_size=5,
            padding="valid", activation="relu", strides=1)(z)
z3 = MaxPooling1D(pool_size=2)(z3)
z3 = Flatten()(z3)

# Concatenate the outputs of all convolution branches
z = Concatenate()([z1, z2, z3])
z = Dropout(dropout_prob[1])(z)

# Dense(64, input_dim=64, kernel_regularizer=regularizers.l2(0.01), activity_regularizer=regularizers.l1(0.01))
z = Dense(hidden_dims, activation="relu",
          kernel_regularizer=regularizers.l2(0.01))(z)
model_output = Dense(N_category, activation="sigmoid")(z)

model = Model(model_input, model_output)
model.compile(loss="categorical_crossentropy",
              optimizer=optimizers.Adadelta(lr=1, decay=0.005),
              metrics=["accuracy"])
model.summary()
Answer 0 (score: 1)
Without digging deep into the model, I would say you should try not training the embedding and instead reuse one of the downloadable pretrained matrices. Even if you cut the model down, your parameter count is still almost 10x the number of data points, so the model is bound to overfit.

It should be the other way around: for 800k parameters, you should have 8M data points.

If you look at your plot, the validation loss drops during the first epoch (or first few) and then rises again, which is another sign that there is not enough data.
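A minimal sketch of that change against the question's code (assuming `train_word2vec` returns a list whose first element is the pretrained matrix, as the indexing `embedding_wts[0]` suggests). Freezing the embedding removes 6,510,000 of the 7,265,605 parameters, leaving roughly 755k trainable:

# Sketch: reuse the pretrained matrix and freeze it so the embedding
# weights are no longer updated during training.
z = Embedding(vocab_size, embedding_dim,
              input_length=max_sent_len,
              weights=embedding_wts,   # list containing the pretrained matrix
              trainable=False,         # freeze the 6.5M embedding parameters
              name="embedding")(model_input)
z = Dropout(dropout_prob[0])(z)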
Answer 1 (score: 1)
From the model summary it looks like you are working on a multi-class problem with 5 output classes. For a multi-class setup, softmax is a much better fit than sigmoid, and I suspect this is the main reason for the very poor performance. If you set up the network with sigmoid and categorical_crossentropy, training can reach 100% training accuracy simply by predicting 1 for every class, but that assumption fails badly on test data.
If you are working on a binary classification problem, then binary_crossentropy is a good choice for the loss function.
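In that case the output would be a single sigmoid unit, e.g.:

# Binary output: a single probability for the positive class
# (labels must be 0/1 rather than one-hot vectors)
model_output = Dense(1, activation="sigmoid")(z)
model.compile(loss="binary_crossentropy",
              optimizer=optimizers.Adadelta(lr=1, decay=0.005),
              metrics=["accuracy"])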
A few other suggestions might also help, but they depend entirely on your modeling assumptions.