We have two DataFrames (Scala syntax is used for illustration),
val df1 = sc.parallelize(1 to 4).map(i => (i,i*10)).toDF("id","x")
val df2 = sc.parallelize(2 to 4).map(i => (i,i*100)).toDF("id","y")
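(The frames are presumably also registered as temporary tables so that the SQL query below can refer to them by name; a sketch using the Spark 1.x API:)

df1.registerTempTable("df1")
df2.registerTempTable("df2")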
How can we sum a column from each frame so that we get this new DataFrame,
+---+---------+
| id| x_plus_y|
+---+---------+
| 1| 10|
| 2| 220|
| 3| 330|
| 4| 440|
+---+---------+
Note: I tried this, but it nullifies the first row, because the left join leaves y as NULL for id=1 and x + NULL evaluates to NULL in SQL,
sqlContext.sql("select df1.id, x+y as x_plus_y from df1 left join df2 on df1.id=df2.id").show
+---+--------+
| id|x_plus_y|
+---+--------+
| 1| null|
| 2| 220|
| 3| 330|
| 4| 440|
+---+--------+
Answer 0 (score: 3)
# Left outer join, keep id/x/y, and replace the missing y values with 0
df3 = df1.join(df2, df1.id == df2.id, "left_outer").select(df1.id, df1.x, df2.y).fillna(0)
df3.select("id", (df3.x + df3.y).alias("x_plus_y")).show()
This works in Python.
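A rough Scala equivalent of the same approach (a sketch, assuming the same df1/df2 as in the question): fill the nulls with zero after the join, then add the two columns,

val df3 = df1.join(df2, df1("id") === df2("id"), "left_outer").
  select(df1("id"), df1("x"), df2("y")).na.fill(0)
df3.select($"id", ($"x" + $"y").alias("x_plus_y")).show()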
Answer 1 (score: 1)
You don't even need to use a UDF:
import org.apache.spark.sql.functions.{coalesce, lit}

// coalesce substitutes 0 for the nulls introduced by the left join, so the
// sum never sees a NULL; na.fill(0) also zeroes the displayed y column
val df3 = df1.as('a).join(df2.as('b), $"a.id" === $"b.id", "left").
  select(df1("id"), 'x, 'y, (coalesce('x, lit(0)) + coalesce('y, lit(0))).alias("x_plus_y")).na.fill(0)
df3.show
// df3: org.apache.spark.sql.DataFrame = [id: int, x: int, y: int, x_plus_y: int]
// +---+---+---+--------+
// | id| x| y|x_plus_y|
// +---+---+---+--------+
// | 1| 10| 0| 10|
// | 2| 20|200| 220|
// | 3| 30|300| 330|
// | 4| 40|400| 440|
// +---+---+---+--------+
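If you prefer to stay in SQL, the same idea can presumably be expressed with coalesce directly in the query (a sketch, assuming df1 and df2 are registered as temp tables as in the question):

sqlContext.sql("""
  select df1.id, coalesce(x, 0) + coalesce(y, 0) as x_plus_y
  from df1 left join df2 on df1.id = df2.id""").show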
Answer 2 (score: 0)
In Scala, I noticed this solution,
val d = sqlContext.sql("""
select df1.id, x, y from df1 left join df2 on df1.id=df2.id""").na.fill(0)
which joins the frames and replaces the unavailable values with zeros, then define this UDF,
import org.apache.spark.sql.functions._
// A plain Scala function lifted into a column-level UDF
val plus: (Int, Int) => Int = (x: Int, y: Int) => x + y
val plus_udf = udf(plus)
d.withColumn("x_plus_y", plus_udf($"x", $"y")).show
+---+---+---+--------+
| id| x| y|x_plus_y|
+---+---+---+--------+
| 1| 10| 0| 10|
| 2| 20|200| 220|
| 3| 30|300| 330|
| 4| 40|400| 440|
+---+---+---+--------+
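For comparison, since na.fill(0) has already replaced the nulls with zeros, the same column could presumably be added without any UDF at all (built-in column arithmetic also stays visible to the optimizer):

d.withColumn("x_plus_y", $"x" + $"y").show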