Converting tuples into a matrix in Spark

Date: 2020-08-11 23:02:59

Tags: java apache-spark apache-spark-sql

I have an RDD of tuple-and-value pairs like the ones below. There are thousands of different pairs.

(A, B), 1
(B, C), 2
(C, D), 1
(A, D), 1
(D, A), 5

I want to convert these tuple/value pairs into a matrix indexed by the elements of each pair. I haven't found any simple way to do this.

+---+------+------+------+------+
|   |  A   |  B   |  C   |  D   |
+---+------+------+------+------+
| A | -    | 1    | NULL | 1    |
| B | NULL | -    | 2    | NULL |
| C | NULL | NULL | -    | 1    |
| D | 5    | NULL | NULL | -    |
+---+------+------+------+------+
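Before reaching for Spark, note that the pairs above are just a sparse matrix, i.e. a map from (row, column) to value, with the missing cells being the NULLs in the table. A minimal plain-Scala sketch of that view (the names `sparse`, `labels` and `cell` are illustrative, not from the question):

```scala
// The tuple/value pairs as a sparse matrix: a map from (row, col) to value.
val pairs = Seq((("A", "B"), 1), (("B", "C"), 2), (("C", "D"), 1), (("A", "D"), 1), (("D", "A"), 5))
val sparse: Map[(String, String), Int] = pairs.toMap

// All labels seen on either side of a pair, in sorted order (the matrix axes).
val labels = pairs.flatMap { case ((r, c), _) => Seq(r, c) }.distinct.sorted

// Cell lookup: None corresponds to NULL in the table above.
def cell(row: String, col: String): Option[Int] = sparse.get((row, col))
```

Note the matrix is not symmetric: `(A, B)` and `(B, A)` are distinct cells, which is why `(D, A), 5` fills only the lower-left entry.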

1 answer:

Answer 0 (score: 1)

Best effort, but you cannot get rid of the generated column name with spark-sql (as you state). Just pivot in natural ordering. Try it out, adding extra tuples.


The following returns the pivoted table:

import org.apache.spark.sql.functions._
// Not sure what the difference is between (("A", "B"), 1) and ("A", "B", 1)
val rdd = sc.parallelize(Seq( (("A", "B"), 1), (("B", "C"), 2), (("C", "D"), 1), (("A", "D"), 1), (("D", "A"), 5), (("E", "Z"), 500) ))

// Could in fact start from here
val rdd2 = rdd.map(x => (x._1._1, x._1._2, x._2))

val df = rdd2.toDF()

// Natural ordering, but cannot get rid of the _1 column in a DataFrame (Spark SQL)
df.groupBy("_1").pivot("_2").agg(first("_3"))
  .orderBy("_1")
  .show(false)
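To see what the pivot produces without running a cluster, the same `groupBy`/`pivot`/`first` logic can be sketched in plain Scala. This is an illustration of the transformation, not Spark itself; the column ordering mimics Spark's behaviour of sorting the distinct pivot values when none are given explicitly:

```scala
// Plain-Scala sketch of df.groupBy("_1").pivot("_2").agg(first("_3")).
val data = Seq(("A", "B", 1), ("B", "C", 2), ("C", "D", 1),
               ("A", "D", 1), ("D", "A", 5), ("E", "Z", 500))

// The distinct pivot values, sorted, become the output columns.
val pivotCols = data.map(_._2).distinct.sorted

// One row per grouping key; None stands in for SQL null.
val pivoted: Map[String, Seq[Option[Int]]] = data.groupBy(_._1).map { case (key, rows) =>
  key -> pivotCols.map(c => rows.find(_._2 == c).map(_._3))
}
```

So for the extra tuple `("E", "Z"), 500`, the row for `E` is null in every column except the new `Z` column, exactly as the DataFrame pivot would show it.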