How to implement a word2vec network in Keras?

Time: 2019-05-02 08:55:30

Tags: python keras word2vec

I want to implement the following network in Keras for a word2vec project:

(image: diagram of the word2vec network)

Here is my attempt at implementing it, with d=50 and V=vocab_size:

from tensorflow.keras.layers import Concatenate, Dense, Input
from tensorflow.keras.models import Model

# three one-hot context words as inputs
ip_shape1 = Input(shape=(vocab_size,))
ip_shape2 = Input(shape=(vocab_size,))
ip_shape3 = Input(shape=(vocab_size,))

# shared projection layer E: maps each one-hot word to a d=50 vector
shared_layer = Dense(50, activation="sigmoid")

op1 = shared_layer(ip_shape1)
op2 = shared_layer(ip_shape2)
op3 = shared_layer(ip_shape3)

# concatenate the three projections and predict the next word
projection_layer = Concatenate()([op1, op2, op3])
hidden_layer = Dense(vocab_size + 100)(projection_layer)
output_layer = Dense(vocab_size, activation='softmax')(hidden_layer)

model = Model(inputs=[ip_shape1, ip_shape2, ip_shape3], outputs=output_layer)

First, I would like to know whether I have implemented the network correctly. Second, I need to extract the shared layer E to obtain the word vectors. How do I extract it?
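One common way to read E off a shared `Dense` layer is via `get_weights()`: its kernel is a `(vocab_size, d)` matrix whose row i is the vector for the word with one-hot index i. A minimal sketch with toy sizes (the `vocab_size=20`, `d=5` values are placeholders, not from the question):

```python
import numpy as np
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

vocab_size, d = 20, 5  # toy sizes for illustration

inp = Input(shape=(vocab_size,))
shared_layer = Dense(d, activation="sigmoid")
out = shared_layer(inp)
model = Model(inp, out)

# get_weights() returns [kernel, bias]; the kernel is the (vocab_size, d)
# matrix playing the role of E, one row per one-hot word index
E = shared_layer.get_weights()[0]
print(E.shape)  # (20, 5)
```

After training the full model, the same call on the shared layer would return the learned word vectors.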


The following is also how I prepare the training data for the network:

import nltk
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.utils import to_categorical

tokenizer = Tokenizer()
tokenizer.fit_on_texts(data)

encoded = tokenizer.texts_to_sequences(data)
# determine the vocabulary size
vocab_size = len(tokenizer.word_index) + 1

# collect every 4-gram of token ids from each sentence
sequences = list()
for i in range(1, len(encoded)):
    sent = encoded[i]
    _4grams = list(nltk.ngrams(sent, n=4))
    for gram in _4grams:
        sequences.append(gram)

# split into X (first three words) and y (fourth word) elements
sequences = np.array(sequences)
X, y = sequences[:, 0:3], sequences[:, 3]

# one-hot encode inputs and outputs
X = to_categorical(X, num_classes=vocab_size)
y = to_categorical(y, num_classes=vocab_size)

Xtrain, Xtest, Ytrain, Ytest = train_test_split(X, y, test_size=0.3, random_state=42)

And this is how I feed it to the network:

history = model.fit([Xtrain[:,0,:],Xtrain[:,1,:], Xtrain[:,2,:]], Ytrain, epochs=10, verbose=1, 
                        validation_data=([Xtest[:,0,:],Xtest[:,1,:], Xtest[:,2,:]], Ytest))
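Note that `model.fit` requires the model to be compiled first, which the code above does not show. A self-contained sketch of how the compile step might look (the toy `vocab_size=20`, the `adam` optimizer, and the random data are assumptions for illustration; `categorical_crossentropy` matches the one-hot targets from `to_categorical`):

```python
import numpy as np
from tensorflow.keras.layers import Concatenate, Dense, Input
from tensorflow.keras.models import Model

vocab_size = 20  # toy size for illustration

# scaled-down copy of the network from the question
ips = [Input(shape=(vocab_size,)) for _ in range(3)]
shared = Dense(5, activation="sigmoid")
proj = Concatenate()([shared(ip) for ip in ips])
hidden = Dense(vocab_size + 10)(proj)
out = Dense(vocab_size, activation="softmax")(hidden)
model = Model(inputs=ips, outputs=out)

# categorical_crossentropy pairs with one-hot targets from to_categorical
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# smoke test on random one-hot data shaped like the question's Xtrain/Ytrain
X = np.eye(vocab_size)[np.random.randint(0, vocab_size, (30, 3))]
y = np.eye(vocab_size)[np.random.randint(0, vocab_size, 30)]
model.fit([X[:, 0], X[:, 1], X[:, 2]], y, epochs=1, verbose=0)
```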

Is this consistent with the figure above?

0 Answers:

There are no answers yet.