Say I have a Spark DataFrame with the following columns:
| header1 | location | precision | header2 | velocity | data |
(The df also contains some data.)
Now I want to transform the df into a new structure with 2 columns, each holding nested fields, like this:
| gps | velocity |
| header1 | location | precision | header2 | velocity | data |
Ideally I could do this with a single method call:
df1 = createStructure(df, "gps", ["header1", "location", "precision"])
df2 = createStructure(df1, "velocity", ["header2", "velocity", "data"])
I've been trying "withColumn", but with no luck.
Answer 0 (score: 1)
Try this:
scala> import org.apache.spark.sql.functions._
import org.apache.spark.sql.functions._
scala> val df1 = Seq(("h1-4", "loc4", "prec4", "h2-4", "vel4", "d4"), ("h1-5", "loc5", "prec5", "h2-5", "vel5", "d5")).toDF("header1", "location", "precision", "header2", "velocity", "data")
df1: org.apache.spark.sql.DataFrame = [header1: string, location: string ... 4 more fields]
scala> df1.show(false)
+-------+--------+---------+-------+--------+----+
|header1|location|precision|header2|velocity|data|
+-------+--------+---------+-------+--------+----+
|h1-4 |loc4 |prec4 |h2-4 |vel4 |d4 |
|h1-5 |loc5 |prec5 |h2-5 |vel5 |d5 |
+-------+--------+---------+-------+--------+----+
scala> val outputDF = df1.withColumn("gps", struct($"header1", $"location", $"precision")).withColumn("velocity", struct($"header2", $"velocity", $"data")).select("gps", "velocity")
outputDF: org.apache.spark.sql.DataFrame = [gps: struct<header1: string, location: string ... 1 more field>, velocity: struct<header2: string, velocity: string ... 1 more field>]
scala> outputDF.printSchema
root
|-- gps: struct (nullable = false)
| |-- header1: string (nullable = true)
| |-- location: string (nullable = true)
| |-- precision: string (nullable = true)
|-- velocity: struct (nullable = false)
| |-- header2: string (nullable = true)
| |-- velocity: string (nullable = true)
| |-- data: string (nullable = true)
scala> outputDF.show(false)
+-------------------+----------------+
|gps |velocity |
+-------------------+----------------+
|[h1-4, loc4, prec4]|[h2-4, vel4, d4]|
|[h1-5, loc5, prec5]|[h2-5, vel5, d5]|
+-------------------+----------------+
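To get the single-method API the question asks for, the two withColumn/struct calls above can be wrapped in a small helper. This is a sketch: the name createStructure and its signature come from the question, not from any Spark API. Note that when the struct name reuses one of the source column names (as "velocity" does), withColumn replaces that column in place, so the helper must not drop it afterwards.

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, struct}

// Group the given columns into one struct column named `structName`,
// then drop the original flat columns. If `structName` reuses one of
// the source column names, keep it: withColumn already replaced it
// with the struct.
def createStructure(df: DataFrame, structName: String, columns: Seq[String]): DataFrame =
  df.withColumn(structName, struct(columns.map(col): _*))
    .drop(columns.filterNot(_ == structName): _*)
```

Usage, matching the question (withColumn appends new columns at the end, so a final select restores the desired column order):

scala> val df2 = createStructure(df1, "gps", Seq("header1", "location", "precision"))
scala> val outputDF = createStructure(df2, "velocity", Seq("header2", "velocity", "data")).select("gps", "velocity")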