Gensim Word2Vec: different most_similar results (Python)

Asked: 2018-03-21 09:08:39

Tags: python string word2vec gensim word-embedding

I have the first Harry Potter book in txt format. From it, I created two new txt files: in the first, every occurrence of Hermione is replaced by Hermione_1; in the second, every occurrence of Hermione is replaced by Hermione_2. I then concatenated the two texts into one long text, which I use as input for Word2Vec. This is my code:

import os
from gensim.models import Word2Vec
from gensim.models import KeyedVectors

with open("HarryPotter1.txt", 'r') as original, \
        open("HarryPotter1_1.txt", 'w') as mod1, \
        open("HarryPotter1_2.txt", 'w') as mod2:

    data=original.read()
    data_1 = data.replace("Hermione", 'Hermione_1')
    data_2 = data.replace("Hermione", 'Hermione_2')
    mod1.write(data_1 + "\n")  # note: r"\n" would write a literal backslash-n, not a newline
    mod2.write(data_2 + "\n")

with open("longText.txt",'w') as longFile:
    with open("HarryPotter1_1.txt",'r') as textfile:
        for line in textfile:
            longFile.write(line)
    with open("HarryPotter1_2.txt",'r') as textfile:
        for line in textfile:
            longFile.write(line)


model = ""
word_vectors = ""
modelName = "ModelTest"
vectorName = "WordVectorsTestst"

answer2 = input("Overwrite embedding? (yes or n) ")  # raw_input() on Python 2
if answer2 == 'yes':
    with open("longText.txt",'r') as longFile:
        sentences = []
        for line in longFile:
            single = []  # reset per line; otherwise every "sentence" accumulates all previous words
            for word in line.split(" "):
                single.append(word)
            sentences.append(single)

    model = Word2Vec(sentences,workers=4, window=5,min_count=5)

    model.save(modelName)
    model.wv.save_word2vec_format(vectorName+".bin",binary=True)
    model.wv.save_word2vec_format(vectorName+".txt", binary=False)
    model.wv.save(vectorName)

    word_vectors = model.wv

else:
    model = Word2Vec.load(modelName)
    word_vectors = KeyedVectors.load_word2vec_format(vectorName + ".bin", binary=True)

    print(model.wv.similarity("Hermione_1","Hermione_2"))
    print(model.wv.distance("Hermione_1","Hermione_2"))
    print(model.wv.most_similar("Hermione_1"))
    print(model.wv.most_similar("Hermione_2"))

How can `model.wv.most_similar("Hermione_1")` and `model.wv.most_similar("Hermione_2")` give me different outputs? Their nearest neighbours are completely different. This is the output of the four print statements:

0.00799602753634
0.992003972464
[('moments,', 0.3204237222671509), ('rose;', 0.3189219534397125), ('Peering', 0.3185565173625946), ('Express,', 0.31800806522369385), ('no...', 0.31678506731987), ('pushing', 0.3131707012653351), ('triumph,', 0.3116190731525421), ('no', 0.29974159598350525), ('them?"', 0.2927379012107849), ('first.', 0.29270970821380615)]
[('go?', 0.45812922716140747), ('magical', 0.35565727949142456), ('Spells."', 0.3554503619670868), ('Scabbets', 0.34701400995254517), ('cupboard."', 0.33982667326927185), ('dreadlocks', 0.3325180113315582), ('sickening', 0.32789379358291626), ('First,', 0.3245708644390106), ('met', 0.3223033547401428), ('built', 0.3218075931072235)]

1 Answer:

Answer 0 (score: 1)

Training a word2vec model is to some extent stochastic, which is why you can get different results from run to run. In addition, Hermione_2 only starts to appear in the second half of your text data. By the time the training process reaches it, the contexts of Hermione_1, and the vector for that word, have already been established; you are then introducing a second word in exactly the same contexts, and the algorithm tries to work out how the two differ.

Second, you are using rather short vectors, which may not capture the complexity of the concept space. Because of that simplification, you can end up with two vectors that have hardly any overlap.