I want to do CNN sentence classification on variable-length sentences, padding them with 0.
But this is not ideal, because 0 will then be treated as a vocabulary word.
To solve this, I want the weights that index 0 maps to in the embedding layer to be all zeros, while the word vectors for the other indices remain trainable.
Reference paper: https://arxiv.org/abs/1408.5882
Here is my current code:
w2v_weight = np.array(list(data['id_vec'].values()))
# add a zero row in the first dim (padding id 0):
zero = np.zeros((1, w2v_weight.shape[1]))
w2v_weight = np.concatenate((zero, w2v_weight), axis=0)

embedding_layer = Embedding(len(data["word_id"]) + 1, 300, weights=[w2v_weight],
                            input_length=data['x_test'].shape[1], trainable=True)
embedding_layer2 = Embedding(len(data["word_id"]) + 1, 300, weights=[w2v_weight],
                             input_length=data['x_test'].shape[1], trainable=False)
model_input = Input(shape=(None,), dtype='int32')
embedded_sequences2 = embedding_layer2(model_input)
embedded_sequences1 = embedding_layer(model_input)
ebd_cct = Concatenate()([embedded_sequences1, embedded_sequences2])

conv1 = Convolution1D(filters=100, kernel_size=3, padding="same")(ebd_cct)
conv2 = Convolution1D(filters=100, kernel_size=4, padding="same")(ebd_cct)
conv3 = Convolution1D(filters=100, kernel_size=5, padding="same")(ebd_cct)
conv_a = Concatenate()([conv1, conv2, conv3])
conv_a = Activation("relu")(conv_a)
conv_add = GlobalMaxPool1D()(conv_a)

z = Dropout(0.5)(conv_add)
model_output = Dense(4, activation="softmax", kernel_constraint=max_norm(3.))(z)

model_1_two = Model(model_input, model_output)
model_1_two.summary()
model_1_two.compile(loss="categorical_crossentropy", optimizer="Adadelta",
                    metrics=['acc'])
history_1_two = model_1_two.fit(data["x_train"], data["y_train"], shuffle=True,
                                callbacks=[EarlyStopping(monitor="val_acc", patience=50)],
                                batch_size=50, epochs=20000,
                                validation_data=(data["x_test"], data["y_test"]))
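To illustrate the padding trick in the code above with a toy example (the ids and vectors below are made up, standing in for data['id_vec']): prepending an all-zero row means that padding id 0 always looks up a zero vector.

```python
import numpy as np

# Toy "pretrained" vectors for a 4-word vocabulary, dimension 3
# (made-up numbers, standing in for data['id_vec']).
w2v = np.arange(12, dtype=float).reshape(4, 3) + 1.0

# Prepend an all-zero row at index 0, as in the question's code,
# so that the padding id 0 maps to the zero vector.
weights = np.concatenate((np.zeros((1, w2v.shape[1])), w2v), axis=0)

# A padded sequence: real word ids are 1..4, 0 is padding.
seq = np.array([3, 1, 4, 0, 0])
embedded = weights[seq]  # what the Embedding lookup computes

print(embedded[-2:])  # the two padded positions are zero vectors
```

Note that this only initializes row 0 to zero; with trainable=True, gradients can still move that row away from zero during training, which is exactly the problem the question is asking about.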
Answer 0 (score: 0)
Based on your code, I assume you are using Keras. You can then declare 0 to be the padding / out-of-vocabulary (OOV) index by setting mask_zero to True:
embedding_layer = Embedding(len(data["word_id"]) + 1, 300, weights=[w2v_weight],
                            input_length=data['x_test'].shape[1], trainable=True,
                            mask_zero=True)
For more information, see the documentation for Embedding().
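For intuition, the mask that mask_zero=True propagates downstream is simply input != 0; mask-aware layers use it to skip padded timesteps. A minimal NumPy sketch of that idea (toy ids and vectors, not Keras itself) follows; be aware that not every layer consumes masks (Conv1D historically does not), so check how your Keras version handles the model above.

```python
import numpy as np

# Toy batch of padded id sequences; 0 is the padding index,
# as with mask_zero=True in Keras.
batch = np.array([[2, 5, 1, 0, 0],
                  [7, 3, 0, 0, 0]])

# The mask that mask_zero=True would propagate downstream:
# True for real tokens, False for padding.
mask = batch != 0

# Toy embedding table: row i is the vector for id i, row 0 all zeros.
rng = np.random.default_rng(0)
weights = np.vstack([np.zeros(3), rng.random((8, 3))])
embedded = weights[batch]                   # shape (2, 5, 3)

# What a mask-aware pooling layer does: average over real tokens only.
lengths = mask.sum(axis=1, keepdims=True)   # (2, 1): [[3], [2]]
pooled = embedded.sum(axis=1) / lengths     # (2, 3)
```

Because the padding rows of the table are zero, summing over all timesteps and dividing by the true lengths gives the mean over real tokens only.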