Why does Spark's Word2Vec return a vector?

Date: 2018-11-13 02:08:17

Tags: java apache-spark machine-learning word2vec apache-spark-ml

While running Spark's example for Word2Vec, I realized that it takes in an array of strings and gives out a single vector. My question is, shouldn't it return a matrix instead of a vector? I was expecting one vector per input word, but it returns one vector, period!

Or maybe it should have accepted a string (a single word) as input instead of an array of strings. Then, yes, it could return one vector as output. But accepting an array of strings and returning one single vector makes no sense to me.

[Update]

As requested by @Shaido, here is the code with my minor change to print the schema of the output:

import java.util.Arrays;
import java.util.List;

import org.apache.spark.ml.feature.Word2Vec;
import org.apache.spark.ml.feature.Word2VecModel;
import org.apache.spark.ml.linalg.Vector;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.*;

public class JavaWord2VecExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession
                .builder()
                .appName("JavaWord2VecExample")
                .getOrCreate();

        // $example on$
        // Input data: Each row is a bag of words from a sentence or document.
        List<Row> data = Arrays.asList(
                RowFactory.create(Arrays.asList("Hi I heard about Spark".split(" "))),
                RowFactory.create(Arrays.asList("I wish Java could use case classes".split(" "))),
                RowFactory.create(Arrays.asList("Logistic regression models are neat".split(" ")))
        );
        StructType schema = new StructType(new StructField[]{
                new StructField("text", new ArrayType(DataTypes.StringType, true), false, Metadata.empty())
        });
        Dataset<Row> documentDF = spark.createDataFrame(data, schema);

        // Learn a mapping from words to Vectors.
        Word2Vec word2Vec = new Word2Vec()
                .setInputCol("text")
                .setOutputCol("result")
                .setVectorSize(7)
                .setMinCount(0);

        Word2VecModel model = word2Vec.fit(documentDF);
        Dataset<Row> result = model.transform(documentDF);

        for (Row row : result.collectAsList()) {
            List<String> text = row.getList(0);
            System.out.println("Schema: " + row.schema());
            Vector vector = (Vector) row.get(1);
            System.out.println("Text: " + text + " => \nVector: " + vector + "\n");
        }
        // $example off$

        spark.stop();
    }
}

It prints:

Schema: StructType(StructField(text,ArrayType(StringType,true),false), StructField(result,org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7,true))
Text: [Hi, I, heard, about, Spark] => 
Vector: [-0.0033279924420639875,-0.0024428479373455048,0.01406305879354477,0.030621735751628878,0.00792500376701355,0.02839711122214794,-0.02286271695047617]

Schema: StructType(StructField(text,ArrayType(StringType,true),false), StructField(result,org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7,true))
Text: [I, wish, Java, could, use, case, classes] => 
Vector: [-9.96453288410391E-4,-0.013741840076233658,0.013064394239336252,-0.01155538750546319,-0.010510949650779366,0.004538436819400106,-0.0036846946126648356]

Schema: StructType(StructField(text,ArrayType(StringType,true),false), StructField(result,org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7,true))
Text: [Logistic, regression, models, are, neat] => 
Vector: [0.012510885251685977,-0.014472834207117558,0.002779599279165268,0.0022389178164303304,0.012743516173213721,-0.02409198731184006,0.017409833287820222]

Correct me if I am wrong, but the input is an array of strings while the output is a single vector. I was expecting each word to be mapped to its own vector.

2 Answers:

Answer 0 (score: 2):

Here is an attempt to justify the rationale of Spark's design, which should be read as a complement to the nice programming explanation already provided in the other answer...

To begin with, how exactly individual word embeddings should be combined is not, in principle, a feature of the Word2Vec model itself (which is about single words), but a concern of "higher-order" models such as Sentence2Vec, Paragraph2Vec, Doc2Vec, Wikipedia2Vec etc. (you could name a few more, I guess...).

Having said that, it turns out that a very first approach for combining word vectors, in order to get a vector representation of a larger piece of text (phrase, sentence, tweet etc.), is indeed to simply average the vector representations of the constituent words, exactly as Spark ML does.

Starting from the practitioner community, we have:

How to concatenate word vectors to form sentence vector (SO answer):

There are at least three common ways to combine embedding vectors; (a) summing, (b) summing & averaging, or (c) concatenating. [...] See gensim.models.doc2vec.Doc2Vec, dm_concat and dm_mean - it allows you to use any of those three options.

Sentence2Vec : Evaluation of popular theories — Part I (Simple average of word vectors) (blog post):

So what comes first to mind, when you have the word vectors and need to calculate the sentence vector?

Just average them?

Yes, and that is what we do here.

Sentence2Vec (Github repository):

Word2Vec can help to find other words with similar semantic meaning. However, Word2Vec can only take one word each time, while a sentence consists of multiple words. To solve this, I write the Sentence2Vec, which is actually a wrapper to Word2Vec. To obtain the vector of a sentence, I simply get the averaged vector of each word in the sentence.
Arguably then, at least among practitioners, this simple averaging of the individual word vectors is anything but unexpected.

An expected counter-argument here is that blog posts and SO answers are arguably not that reliable as sources; what about the researchers and the relevant scientific literature? Well, it turns out that this simple averaging is far from uncommon there, too:

From Distributed Representations of Sentences and Documents (Le & Mikolov, Google, ICML 2014):


From NILC-USP at SemEval-2017 Task 4: A Multi-view Ensemble for Twitter Sentiment Analysis (SemEval 2017, section 2.1.2):



It should be clear by now that the particular design choice in Spark ML is by no means arbitrary, or even uncommon; I have blogged about what do seem to be absurd design choices in Spark ML (see Classification in Spark 2.0: "Input validation failed" and other wondrous tales), but, as it turns out, this is not such a case...

Answer 1 (score: 1):

To see the vector corresponding to each word, you can run model.getVectors. For the dataframe in the question (with a vector size of 3 instead of 7), this gives:

+----------+-----------------------------------------------------------------+
|word      |vector                                                           |
+----------+-----------------------------------------------------------------+
|heard     |[0.14950960874557495,-0.11237259954214096,-0.03993036597967148]  |
|are       |[-0.16390761733055115,-0.14509087800979614,0.11349033564329147]  |
|neat      |[0.13949351012706757,0.08127426356077194,0.15970033407211304]    |
|classes   |[0.03703496977686882,0.05841822177171707,-0.02267565205693245]   |
|I         |[-0.018915412947535515,-0.13099457323551178,0.14300788938999176] |
|regression|[0.1529865264892578,0.060659825801849365,0.07735282927751541]    |
|Logistic  |[-0.12702016532421112,0.09839040040969849,-0.10370948910713196]  |
|Spark     |[-0.053579315543174744,0.14673036336898804,-0.002033260650932789]|
|could     |[0.12216471135616302,-0.031169598922133446,-0.1427609771490097]  |
|use       |[0.08246973901987076,0.002503493567928672,-0.0796264186501503]   |
|Hi        |[0.16548289358615875,0.06477408856153488,0.09229831397533417]    |
|models    |[-0.05683165416121483,0.009706663899123669,-0.033789146691560745]|
|case      |[0.11626788973808289,0.10363516956567764,-0.07028932124376297]   |
|about     |[-0.1500445008277893,-0.049380943179130554,0.03307584300637245]  |
|Java      |[-0.04074851796030998,0.02809843420982361,-0.16281810402870178]  |
|wish      |[0.11882393807172775,0.13347993791103363,0.14399205148220062]    |
+----------+-----------------------------------------------------------------+

So each word does have its own representation. What happens, however, when you feed the model a sentence (an array of strings) is that all the vectors of the words in the sentence are averaged together.

From the github implementation:

/**
 * Transform a sentence column to a vector column to represent the whole sentence. The transform
 * is performed by averaging all word vectors it contains.
 */
@Since("2.0.0")
override def transform(dataset: Dataset[_]): DataFrame = {
...
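For intuition, that averaging step can be sketched in plain Java, without Spark. This is a simplified stand-in, not Spark's actual code: the map plays the role of the model's word-vector table, and it assumes every word has a vector (which holds in the question, since minCount is 0):

```java
import java.util.Map;

// Spark-free sketch of the transform step: the sentence vector is the
// element-wise average of the vectors of the words it contains.
public class SentenceVectorSketch {

    static double[] sentenceVector(String[] words, Map<String, double[]> wordVectors, int size) {
        double[] out = new double[size];
        for (String w : words) {
            double[] v = wordVectors.get(w); // assumed present (minCount = 0)
            for (int i = 0; i < size; i++) out[i] += v[i];
        }
        for (int i = 0; i < size; i++) out[i] /= words.length;
        return out;
    }

    public static void main(String[] args) {
        // illustrative (rounded) vectors for three of the words above
        Map<String, double[]> vectors = Map.of(
            "models", new double[]{-0.06, 0.01, -0.03},
            "are",    new double[]{-0.16, -0.15, 0.11},
            "neat",   new double[]{0.13, 0.08, 0.16});
        double[] s = sentenceVector(new String[]{"models", "are", "neat"}, vectors, 3);
        // s[0] is the mean of the three first elements: (-0.06 - 0.16 + 0.13) / 3
        System.out.println(s[0]);
    }
}
```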

This is easy to confirm. For example, with vector size 3:

Text: [Logistic, regression, models, are, neat] => 
Vector: [-0.011055880039930344,0.020988055132329465,0.042608972638845444]

The first element is computed as the average of the first elements of the five word vectors involved:

(-0.12702016532421112 + 0.1529865264892578 - 0.05683165416121483 - 0.16390761733055115 + 0.13949351012706757) / 5

which equals -0.011055880039930344.
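This arithmetic can be double-checked with a few lines of plain Java (no Spark needed; the values are copied from the getVectors table above):

```java
// Verify that the first element of the sentence vector for
// "Logistic regression models are neat" is the mean of the
// first elements of the five word vectors.
public class AverageCheck {

    static double mean(double[] xs) {
        double sum = 0.0;
        for (double x : xs) sum += x;
        return sum / xs.length;
    }

    public static void main(String[] args) {
        double[] firstElements = {
            -0.12702016532421112, // Logistic
             0.1529865264892578,  // regression
            -0.05683165416121483, // models
            -0.16390761733055115, // are
             0.13949351012706757  // neat
        };
        System.out.println(mean(firstElements)); // ~ -0.01105588...
    }
}
```

(The last couple of digits may differ from the printed transform output due to floating-point rounding order.)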