Merging DataFrames in Spark

Time: 2016-08-01 19:28:42

Tags: scala apache-spark apache-spark-sql spark-dataframe

I have 2 DataFrames, say A and B. I want to join them on the key column and create another DataFrame. When the key matches in both A and B, I need to take the row from B, not the one from A.

For example:

DataFrame A:

Employee1, salary100
Employee2, salary50
Employee3, salary200

DataFrame B:

Employee1, salary150
Employee2, salary100
Employee4, salary300

The resulting DataFrame should be:

DataFrame C:

Employee1, salary150
Employee2, salary100
Employee3, salary200
Employee4, salary300

How can I do this in Spark and Scala?
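
For reference, a minimal sketch that builds the sample data above (the SQLContext named sqlContext and the column names employee and salary are assumptions, not part of the question; the second answer below calls the columns emp and sal instead):

// Build the question's sample DataFrames (Spark 1.x style)
import sqlContext.implicits._

val dfA = Seq(
  ("Employee1", 100),
  ("Employee2", 50),
  ("Employee3", 200)
).toDF("employee", "salary")

val dfB = Seq(
  ("Employee1", 150),
  ("Employee2", 100),
  ("Employee4", 300)
).toDF("employee", "salary")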

2 Answers:

Answer 0 (score: 1)

Try:

dfA.registerTempTable("dfA")
dfB.registerTempTable("dfB")

sqlContext.sql("""
SELECT coalesce(dfA.employee, dfB.employee), 
       coalesce(dfB.salary, dfA.salary) FROM dfA FULL OUTER JOIN dfB
ON dfA.employee = dfB.employee""")

sqlContext.sql("""
SELECT coalesce(dfA.employee, dfB.employee),
  CASE dfB.employee IS NOT NULL THEN dfB.salary
  CASE dfB.employee IS NOT NULL THEN dfA.salary
  END FROM dfA FULL OUTER JOIN dfB
ON dfA.employee = dfB.employee""")
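
Both statements encode the same rule: take B's salary when B has the employee, otherwise fall back to A's. As a usage sketch (the variable name dfC is hypothetical), the merged result can be materialized and checked against DataFrame C above:

val dfC = sqlContext.sql("""
SELECT coalesce(dfA.employee, dfB.employee) AS employee,
       coalesce(dfB.salary, dfA.salary) AS salary
FROM dfA FULL OUTER JOIN dfB
ON dfA.employee = dfB.employee""")

// Expected rows: Employee1/150, Employee2/100, Employee3/200, Employee4/300
dfC.orderBy("employee").show()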

Answer 1 (score: 1)

Assume dfA and dfB each have two columns, emp and sal. You can use the following:

import org.apache.spark.sql.{functions => f}
import sqlContext.implicits._  // needed (outside spark-shell) for the 'emp symbol-to-Column syntax

// Rename dfB's columns so the join result has unambiguous names
val dfB1 = dfB
  .withColumnRenamed("sal", "salB")
  .withColumnRenamed("emp", "empB")

// Full outer join on the key, preferring dfB's values where present
val joined = dfA
  .join(dfB1, 'emp === 'empB, "outer")
  .select(
    f.coalesce('empB, 'emp).as("emp"),
    f.coalesce('salB, 'sal).as("sal")
  )

Note: each DataFrame should contain only one row for any given emp value.
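
If that uniqueness assumption might not hold, one option (a sketch, not part of the answer, following its emp/sal naming) is to drop duplicate keys before joining:

// Keep only one (arbitrary) row per emp before the join
val dfAUnique = dfA.dropDuplicates(Seq("emp"))
val dfBUnique = dfB.dropDuplicates(Seq("emp"))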