I'm using Spark SQL with DataFrames. I have an input DataFrame, and I want to append (or insert) its rows into a larger DataFrame that has more columns. How would I do that?
If this were SQL, I would use INSERT INTO OUTPUT SELECT ... FROM INPUT, but I don't know how to do that with Spark SQL.
Concretely:
var input = sqlContext.createDataFrame(Seq(
(10L, "Joe Doe", 34),
(11L, "Jane Doe", 31),
(12L, "Alice Jones", 25)
)).toDF("id", "name", "age")
var output = sqlContext.createDataFrame(Seq(
(0L, "Jack Smith", 41, "yes", 1459204800L),
(1L, "Jane Jones", 22, "no", 1459294200L),
(2L, "Alice Smith", 31, "", 1459595700L)
)).toDF("id", "name", "age", "init", "ts")
scala> input.show()
+---+-----------+---+
| id| name|age|
+---+-----------+---+
| 10| Joe Doe| 34|
| 11| Jane Doe| 31|
| 12|Alice Jones| 25|
+---+-----------+---+
scala> input.printSchema()
root
|-- id: long (nullable = false)
|-- name: string (nullable = true)
|-- age: integer (nullable = false)
scala> output.show()
+---+-----------+---+----+----------+
| id| name|age|init| ts|
+---+-----------+---+----+----------+
| 0| Jack Smith| 41| yes|1459204800|
| 1| Jane Jones| 22| no|1459294200|
| 2|Alice Smith| 31| |1459595700|
+---+-----------+---+----+----------+
scala> output.printSchema()
root
|-- id: long (nullable = false)
|-- name: string (nullable = true)
|-- age: integer (nullable = false)
|-- init: string (nullable = true)
|-- ts: long (nullable = false)
I want to append all rows of input to the end of output. At the same time, I want to set the init column of output to the empty string '' and the ts column to the current timestamp, e.g. 1461883875L.
Any help would be appreciated.
Answer 0 (score: 17)
Spark DataFrames are immutable, so it is not possible to append / insert rows. Instead, you can just add the missing columns and use UNION ALL:
output.unionAll(input.select($"*", lit(""), current_timestamp.cast("long")))
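Note that unionAll resolves columns by position, so the appended columns must come in the same order as in output. On Spark 2.x, unionAll is deprecated in favor of union, and since 2.3 there is unionByName, which matches columns by name instead. A minimal sketch of the same approach on Spark 2.3+, assuming the input and output frames from the question are in scope:
import org.apache.spark.sql.functions.{current_timestamp, lit}
// Add the two missing columns with explicit names, then union by name (Spark 2.3+).
val padded = input
  .withColumn("init", lit(""))                         // empty string for init
  .withColumn("ts", current_timestamp().cast("long"))  // current Unix timestamp in seconds
val combined = output.unionByName(padded)
combined.show()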
Answer 1 (score: 1)
I ran into a similar problem matching your SQL question: I wanted to append a DataFrame to an existing Hive table that is also larger (has more columns). To keep with your example: output is my existing table and input could be the DataFrame. My solution uses plain SQL, and for the sake of completeness I want to provide it:
import org.apache.spark.sql.SaveMode
var input = spark.createDataFrame(Seq(
(10L, "Joe Doe", 34),
(11L, "Jane Doe", 31),
(12L, "Alice Jones", 25)
)).toDF("id", "name", "age")
//--> just for a running example: In my case the table already exists
var output = spark.createDataFrame(Seq(
(0L, "Jack Smith", 41, "yes", 1459204800L),
(1L, "Jane Jones", 22, "no", 1459294200L),
(2L, "Alice Smith", 31, "", 1459595700L)
)).toDF("id", "name", "age", "init", "ts")
output.write.mode(SaveMode.Overwrite).saveAsTable("appendTest")
//<--
input.createOrReplaceTempView("inputTable")
spark.sql("INSERT INTO TABLE appendTest SELECT id, name, age, null, null FROM inputTable")
val df = spark.sql("SELECT * FROM appendTest")
df.show()
Output:
+---+-----------+---+----+----------+
| id| name|age|init| ts|
+---+-----------+---+----+----------+
| 0| Jack Smith| 41| yes|1459204800|
| 1| Jane Jones| 22| no|1459294200|
| 2|Alice Smith| 31| |1459595700|
| 12|Alice Jones| 25|null| null|
| 11| Jane Doe| 31|null| null|
| 10| Joe Doe| 34|null| null|
+---+-----------+---+----+----------+
If you have the problem that you don't know how many fields are missing, you can use a diff like
val missingFields = output.schema.toSet.diff(input.schema.toSet)
and then (in bad pseudocode)
val sqlQuery = "INSERT INTO TABLE appendTest SELECT " + commaSeparatedColumnNames + commaSeparatedNullsForEachMissingField + " FROM inputTable"
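For completeness, here is one way that pseudocode could look as runnable Scala. This is just a sketch under the assumptions above (the appendTest table and the inputTable temp view exist); it builds the SELECT list in the target table's column order, since INSERT INTO ... SELECT matches columns by position:
// One SELECT item per target column, in the table's order:
// the input column if it exists there, otherwise a typed NULL literal.
val inputCols = input.columns.toSet
val selectList = output.schema.fields.map { f =>
  if (inputCols.contains(f.name)) s"`${f.name}`"
  else s"CAST(null AS ${f.dataType.sql}) AS `${f.name}`"
}
val sqlQuery = s"INSERT INTO TABLE appendTest SELECT ${selectList.mkString(", ")} FROM inputTable"
spark.sql(sqlQuery)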
Hope this helps people with future problems!
P.S.: In your special case (current timestamp + empty field for init) you could even use
spark.sql("INSERT INTO TABLE appendTest SELECT id, name, age, '' as init, current_timestamp as ts FROM inputTable");
resulting in
+---+-----------+---+----+----------+
| id| name|age|init| ts|
+---+-----------+---+----+----------+
| 0| Jack Smith| 41| yes|1459204800|
| 1| Jane Jones| 22| no|1459294200|
| 2|Alice Smith| 31| |1459595700|
| 12|Alice Jones| 25| |1521128513|
| 11| Jane Doe| 31| |1521128513|
| 10| Joe Doe| 34| |1521128513|
+---+-----------+---+----+----------+