How to flatMap one dataFrame from another dataFrame in Java?

Asked: 2019-03-20 05:47:32

Tags: java apache-spark dataframe

I have a dataFrame that looks like this:

+-----------------+--------------------+
|               id|            document|
+-----------------+--------------------+
| doc1            |"word1, word2"      |
| doc2            |"word3 word4"       |
+-----------------+--------------------+

I want to create another dataFrame with the following structure:

+-----------------+--------------------+----------------+
|               id|            document|            word|
+-----------------+--------------------+----------------+
| doc1            |"word1, word2"      | word1          |
| doc1            |"word1, word2"      | word2          |
| doc2            |"word3 word4"       | word3          |
| doc2            |"word3 word4"       | word4          |
+-----------------+--------------------+----------------+

I tried the following:

public static Dataset<Row> buildInvertIndex(Dataset<Row> inputRaw, SQLContext sqlContext, String id) {

    // Emit one (id, document, word) triple per word in each document.
    JavaRDD<Row> inputInvertedIndex = inputRaw.javaRDD();
    JavaRDD<Tuple3<String, String, String>> d = inputInvertedIndex.flatMap(x -> {
        List<Tuple3<String, String, String>> k = new ArrayList<>();
        String docId = x.getString(0);
        String[] words = x.getString(1).split(" ", -1);
        for (String s : words) {
            k.add(new Tuple3<>(docId, x.getString(1), s));
        }
        return k.iterator();
    });

    // Re-key by word, keeping (id, document) as the value.
    JavaPairRDD<String, Tuple2<String, String>> d2 = d.mapToPair(x ->
            new Tuple2<>(x._3(), new Tuple2<>(x._1(), x._2())));

    Dataset<Row> d3 = sqlContext.createDataset(
            JavaPairRDD.toRDD(d2),
            Encoders.tuple(Encoders.STRING(), Encoders.tuple(Encoders.STRING(), Encoders.STRING()))).toDF();

    return d3;
}

But it gives:

+-----------------+----------------------+
|               _1|                    _2|
+-----------------+----------------------+
| word1           |[doc1, "word1, word2"]|
| word2           |[doc1, "word1, word2"]|
| word3           |[doc2, "word3 word4"] |
| word4           |[doc2, "word3 word4"] |
+-----------------+----------------------+
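Those default names (_1, _2) come from the tuple encoder: the outer tuple becomes two columns, and the inner tuple becomes a struct. If you stay with the RDD approach, the struct fields can be pulled out and renamed in place; a sketch against the d3 frame returned above, assuming that same nested tuple layout:

```java
// _1 holds the word; _2 is a struct of (id, document) from the nested tuple encoder.
// selectExpr can reach into the struct and rename everything in one pass.
Dataset<Row> named = d3.selectExpr("_2._1 as id", "_2._2 as document", "_1 as word");
```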

I am new to Java, so any help would be much appreciated. Also, suppose that in the second dataFrame above I want to compute a string similarity measure (e.g. Jaccard) between the document and word columns and add the result as a new column; how can I do that?
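For the Jaccard part of the question, the set overlap itself is plain Java and can be written independently of Spark; a minimal sketch (the class and method names here are illustrative, not from any library):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class JaccardDemo {

    // Token-level Jaccard index: |A ∩ B| / |A ∪ B|, where A and B are the
    // sets of tokens obtained by splitting on one or more commas/spaces.
    public static double jaccard(String a, String b) {
        Set<String> sa = new HashSet<>(Arrays.asList(a.split("[, ]+")));
        Set<String> sb = new HashSet<>(Arrays.asList(b.split("[, ]+")));

        Set<String> union = new HashSet<>(sa);
        union.addAll(sb);
        if (union.isEmpty()) {
            return 0.0;
        }

        Set<String> intersection = new HashSet<>(sa);
        intersection.retainAll(sb);
        return (double) intersection.size() / union.size();
    }

    public static void main(String[] args) {
        // "word1" shares 1 of the 2 distinct tokens in "word1, word2".
        System.out.println(jaccard("word1, word2", "word1")); // 0.5
    }
}
```

A helper like this could then be registered as a Spark SQL UDF (e.g. via spark.udf().register with DataTypes.DoubleType) and applied with withColumn on the exploded frame; the registration name and wiring are an assumption about your setup, not shown in the question.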

1 Answer:

Answer 0 (score: 1):

You can use explode together with split:

import static org.apache.spark.sql.functions.expr;
inputRaw.withColumn("word", expr("explode(split(document, '[, ]+'))"))
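The '[, ]+' pattern is what lets one expression handle both the comma-separated document ("word1, word2") and the space-separated one ("word3 word4") from the sample data, since Spark's split follows Java regex semantics. Its behavior can be checked with plain java.lang.String.split:

```java
import java.util.Arrays;

public class SplitDemo {
    public static void main(String[] args) {
        // Same regex as in the Spark expression: one or more commas and/or spaces.
        System.out.println(Arrays.toString("word1, word2".split("[, ]+"))); // [word1, word2]
        System.out.println(Arrays.toString("word3 word4".split("[, ]+")));  // [word3, word4]
    }
}
```

explode then emits one output row per element of the resulting array, keeping the id and document columns alongside the new word column, which matches the desired second dataFrame.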