How to get the list of words from a PySpark word2vec model?

Date: 2017-07-27 09:18:00

Tags: apache-spark nlp pyspark apache-spark-mllib word2vec

I am trying to generate word vectors using PySpark. With gensim I can see the words and their closest words as follows:

import os
from gensim.models import word2vec

# one tweet per line; tokenise each line into a list of words
sentences = open(os.getcwd() + "/tweets.txt").read().splitlines()
w2v_input = []
for i in sentences:
    tokenised = i.split()
    w2v_input.append(tokenised)
model = word2vec.Word2Vec(w2v_input)
for key in model.wv.vocab.keys():
    print(key)
    print(model.most_similar(positive=[key]))

With PySpark:

from pyspark.mllib.feature import Word2Vec

inp = sc.textFile("tweet.txt").map(lambda row: row.split(" "))
word2vec = Word2Vec()
model = word2vec.fit(inp)

How do I get the words out of the model's vector space? That is, what is the PySpark equivalent of gensim's model.wv.vocab.keys()?

Background: I need to store the words and their synonyms from the model in a map, so that I can use them later to find the sentiment of tweets. I cannot reuse the word-vector model inside a map function in PySpark, because the model belongs to the Spark context (error pasted below). I want the PySpark word2vec version rather than gensim because it gives better synonyms for some of my test words.

 Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers.

Any alternative solutions are also welcome.

2 Answers:

Answer 0 (score: 6)

The equivalent command in Spark is model.getVectors(), which again returns a dictionary. Here is a quick toy example with just 3 words (alpha, beta, charlie), adapted from the documentation:

sc.version
# u'2.1.1'

from pyspark.mllib.feature import Word2Vec
sentence = "alpha beta " * 100 + "alpha charlie " * 10
localDoc = [sentence, sentence]
doc = sc.parallelize(localDoc).map(lambda line: line.split(" "))
word2vec = Word2Vec()
model = word2vec.fit(doc)

model.getVectors().keys()
#  [u'alpha', u'beta', u'charlie']

Regarding finding synonyms, you may find another answer of mine useful.
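
For completeness, a minimal sketch of looking up synonyms directly on the fitted toy model above, using the findSynonyms method of the mllib Word2VecModel (the word 'alpha' and the count 2 are just illustrative values):

# top-2 nearest words to 'alpha', with their cosine similarities
for word, similarity in model.findSynonyms('alpha', 2):
    print(word, similarity)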

Regarding the error you mention and possible workarounds, please see this answer of mine.
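
One common workaround, sketched here under the assumption that only the synonym lookups (not the model itself) are needed on the workers: collect what you need into a plain Python dict on the driver and broadcast it, so no SparkContext is referenced inside map(). The names syn_map and syn_bc and the synonym count 5 are illustrative.

# driver side: plain dict of word -> list of synonyms, built outside any transformation
syn_map = {w: [s for s, _ in model.findSynonyms(w, 5)]
           for w in model.getVectors().keys()}
syn_bc = sc.broadcast(syn_map)

# worker side: only the broadcast dict is touched, never the model or sc
expanded = inp.map(lambda tokens: [(t, syn_bc.value.get(t, [])) for t in tokens])

For a large vocabulary this driver-side loop can be slow, but it keeps the heavy model object out of the closures shipped to the executors.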

Answer 1 (score: 0)

And, as suggested here, if you want all the words in your document to be included, set the minCount parameter accordingly (the default is 5):

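As a sketch of the idea, assuming the same pyspark.mllib API used in the first answer and reusing the asker's inp RDD, the minimum count can be lowered through the setMinCount setter before fitting, so that words occurring fewer than 5 times also end up in the vocabulary:

from pyspark.mllib.feature import Word2Vec

# keep every token, including ones that appear only once
word2vec = Word2Vec().setMinCount(1)
model = word2vec.fit(inp)
model.getVectors().keys()   # now also contains the rare words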