How to transpose/pivot row data into columns in Spark Scala?

Time: 2017-12-28 10:51:14

Tags: scala apache-spark apache-spark-sql pivot

I am new to Spark SQL. I have data like this in a Spark DataFrame, one row per (Company, Type, Status) combination:

Company Type Status
A       X    done
A       Y    done
A       Z    done
B       Y    done
C       X    done
C       Y    done

I would like to display it like this, with missing combinations shown as pending:

Company X-type  Y-type  Z-type
A       done    done    done
B       pending done    pending
C       done    done    pending

I have not been able to achieve this with Spark SQL.

Please help.

1 Answer:

Answer 0 (score: 3)

You can group by Company and then use the pivot function on the Type column.

Here is a simple example:

import org.apache.spark.sql.functions._
import spark.implicits._  // required for .toDF on an RDD/Seq

// Sample data in long format: one row per (Company, Type, Status)
val df = spark.sparkContext.parallelize(Seq(
        ("A", "X", "done"),
        ("A", "Y", "done"),
        ("A", "Z", "done"),
        ("C", "X", "done"),
        ("C", "Y", "done"),
        ("B", "Y", "done")
      )).toDF("Company", "Type", "Status")

// Group by Company and turn each distinct Type into its own column;
// combinations that have no row are filled with "pending"
val result = df.groupBy("Company")
    .pivot("Type")
    .agg(expr("coalesce(first(Status), \"pending\")"))

result.show()

Output:

+-------+-------+----+-------+
|Company|      X|   Y|      Z|
+-------+-------+----+-------+
|      B|pending|done|pending|
|      C|   done|done|pending|
|      A|   done|done|   done|
+-------+-------+----+-------+

You can rename the columns afterwards.
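For example, to turn the pivoted column names (X, Y, Z) into the names from the desired output (X-type, Y-type, Z-type), the renaming logic can be sketched like this; `pivotedName` is a hypothetical helper, and the `Seq` below just stands in for the column names the pivot produces:

```scala
// Column names as produced by the pivot above
val pivoted = Seq("Company", "X", "Y", "Z")

// Hypothetical helper: append "-type" to every column except the grouping key
def pivotedName(c: String): String =
  if (c == "Company") c else s"$c-type"

val desired = pivoted.map(pivotedName)
// desired: Seq("Company", "X-type", "Y-type", "Z-type")
```

On the actual DataFrame you would apply this with a fold over `result.columns`, e.g. `result.columns.foldLeft(result)((df, c) => df.withColumnRenamed(c, pivotedName(c)))`. As a side note, if the set of types is known in advance, passing it explicitly as `pivot("Type", Seq("X", "Y", "Z"))` fixes the column order and saves Spark an extra pass to discover the distinct values.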

Hope this helps!