Is it possible to load word2vec pre-trained available vectors into spark?

Time: 2017-08-04 12:11:16

Tags: scala apache-spark stanford-nlp word2vec

Is there a way to load Google's or GloVe's pre-trained vectors (models), such as GoogleNews-vectors-negative300.bin.gz, into Spark and perform operations such as findSynonyms that Spark provides? Or do I need to do the loading and the operations from scratch?

In the post Load Word2Vec model in Spark, Tom Lous suggests converting the bin file to txt and starting from there. I already did that, but then what is next?
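For reference, each line of the converted txt file is just the word followed by its vector components separated by spaces, so the step after conversion is parsing those lines into term/vector pairs. A minimal sketch in plain Java (the class and method names here are hypothetical, not from any library):

```java
import java.util.Map;

public class W2VTextLine {

    // Parse one line of word2vec text format, e.g. "cat 0.1 -0.2 0.3",
    // into a (term, vector) pair.
    static Map.Entry<String, float[]> parseLine(String line) {
        String[] parts = line.trim().split("\\s+");
        float[] vec = new float[parts.length - 1];
        for (int i = 1; i < parts.length; i++) {
            vec[i - 1] = Float.parseFloat(parts[i]);
        }
        return Map.entry(parts[0], vec);
    }

    public static void main(String[] args) {
        Map.Entry<String, float[]> e = parseLine("cat 0.1 -0.2 0.3");
        System.out.println(e.getKey() + " has " + e.getValue().length + " dimensions");
    }
}
```

From there you can build up the `Map<String, float[]>` that the answer below constructs from Parquet.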

In a question I posted yesterday, I got an answer saying that models in Parquet format can be loaded into Spark, so I'm posting this question to make sure there is no other option.

1 Answer:

Answer 0: (score: 1)

Disclaimer: I'm fairly new to this, but the following at least worked for me.

The trick is figuring out how to construct a Word2VecModel from a set of word vectors, and handling a few pitfalls you hit when trying to create a model this way.

First, load the word vectors into a Map. For example, I had saved my word vectors in Parquet format (in a folder called "wordvectors.parquet"), where the "term" column holds the word as a String and the "vector" column holds the vector as an array&lt;float&gt;, and I can load it in Java like this:

// Load the dataset with the "term" column holding the word and the "vector"
// column holding the vector as an array<float>
Dataset<Row> vectorModel = pSpark.read().parquet("wordvectors.parquet");

// Convert the dataset to a Map<String, List<Float>>
Map<String, List<Float>> vectorMap = vectorModel.collectAsList().stream()
        .collect(Collectors.toMap(row -> row.getAs("term"), row -> row.getList(1)));

// Convert to the format the word2vec model expects: float[] rather than List<Float>.
// Floats.toArray is from Guava and already returns float[], so no cast is needed.
Map<String, float[]> word2vecMap = vectorMap.entrySet().stream()
        .collect(Collectors.toMap(Map.Entry::getKey, entry -> Floats.toArray(entry.getValue())));

// Convert to a Scala immutable map, because that's what Word2VecModel needs
scala.collection.immutable.Map<String, float[]> scalaMap = toScalaImmutableMap(word2vecMap);

private static <K, V> scala.collection.immutable.Map<K, V> toScalaImmutableMap(Map<K, V> pFromMap) {
    final List<Tuple2<K, V>> list = pFromMap.entrySet().stream()
            .map(e -> Tuple2.apply(e.getKey(), e.getValue()))
            .collect(Collectors.toList());

    Seq<Tuple2<K, V>> scalaSeq = JavaConverters.asScalaBufferConverter(list).asScala().toSeq();

    return (scala.collection.immutable.Map<K, V>) scala.collection.immutable.Map$.MODULE$.apply(scalaSeq);
}

Now you can construct the model from scratch. Because of a quirk in how Word2VecModel works, you have to set the vector size manually, and in a strange way; otherwise it defaults to 100 and you get an error when you try to call .transform(). Here is a way I found that works; I'm not sure whether everything here is necessary:

// Not used for fitting, only used for setting the vector size param
// (not sure if this is needed, or if result.set is enough)
Word2Vec parent = new Word2Vec();
parent.setVectorSize(300);

Word2VecModel result = new Word2VecModel("w2vmodel",
        new org.apache.spark.mllib.feature.Word2VecModel(scalaMap)).setParent(parent);
result.set(result.vectorSize(), 300);

Now you should be able to use result.transform() just as you would with a self-trained model.

I haven't tested whether the other Word2VecModel functions work correctly; I only tested .transform().
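Since the question also asks about findSynonyms: once you have the vectors in a plain `Map<String, float[]>`, a nearest-neighbor lookup by cosine similarity is easy to do outside Spark as well. This is a hypothetical helper of my own, not Spark's `findSynonyms` API, but it computes the same kind of ranking:

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class SynonymLookup {

    // Cosine similarity between two vectors of equal length
    static double cosine(float[] a, float[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Return the `num` terms most similar to `word`, ranked by cosine similarity
    static List<String> findSynonyms(Map<String, float[]> vectors, String word, int num) {
        float[] target = vectors.get(word);
        return vectors.entrySet().stream()
                .filter(e -> !e.getKey().equals(word))
                .sorted(Comparator.comparingDouble(
                        (Map.Entry<String, float[]> e) -> -cosine(target, e.getValue())))
                .limit(num)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Toy 2-dimensional vectors for illustration
        Map<String, float[]> vectors = new HashMap<>();
        vectors.put("cat", new float[]{1f, 0f});
        vectors.put("kitten", new float[]{0.9f, 0.1f});
        vectors.put("car", new float[]{0f, 1f});
        System.out.println(findSynonyms(vectors, "cat", 1));
    }
}
```

Note this does a full linear scan over the vocabulary, which is fine for a few hundred thousand terms but is exactly the sort of work Spark's own findSynonyms distributes for you.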