The first Df is:

ID  Name  ID2  Marks
1   12    1    333

The second Df2 is:

ID  Name  ID2  Marks
1         3    989
7   98    8    878

The output I need is:

ID  Name  ID2  Marks
1   12    1    333
1         3    989
7   98    8    878

Please help!
Answer 0 (score: 0)
Use the union or unionAll function:
df1.unionAll(df2)
df1.union(df2)
For example:
scala> val a = (1,"12",1,333)
a: (Int, String, Int, Int) = (1,12,1,333)
scala> val b = (1,"",3,989)
b: (Int, String, Int, Int) = (1,,3,989)
scala> val c = (7,"98",8,878)
c: (Int, String, Int, Int) = (7,98,8,878)
scala> import spark.implicits._
import spark.implicits._
scala> val df1 = List(a).toDF("ID","Name","ID2","Marks")
df1: org.apache.spark.sql.DataFrame = [ID: int, Name: string ... 2 more fields]
scala> val df2 = List(b, c).toDF("ID","Name","ID2","Marks")
df2: org.apache.spark.sql.DataFrame = [ID: int, Name: string ... 2 more fields]
scala> df1.show
+---+----+---+-----+
| ID|Name|ID2|Marks|
+---+----+---+-----+
| 1| 12| 1| 333|
+---+----+---+-----+
scala> df2.show
+---+----+---+-----+
| ID|Name|ID2|Marks|
+---+----+---+-----+
| 1| | 3| 989|
| 7| 98| 8| 878|
+---+----+---+-----+
scala> df1.union(df2).show
+---+----+---+-----+
| ID|Name|ID2|Marks|
+---+----+---+-----+
| 1| 12| 1| 333|
| 1| | 3| 989|
| 7| 98| 8| 878|
+---+----+---+-----+
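For reference, the same result can be produced with plain SQL, since union behaves like UNION ALL. A minimal sketch, assuming the df1/df2 defined above and a SparkSession named spark:

// register the DataFrames as temporary views, then union them with SQL
df1.createOrReplaceTempView("t1")
df2.createOrReplaceTempView("t2")
spark.sql("SELECT * FROM t1 UNION ALL SELECT * FROM t2").show()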
Answer 1 (score: 0)
A simple union or unionAll should do the trick for you:
Df.union(Df2)
or
Df.unionAll(Df2)
As described in the API documentation:
Returns a new Dataset containing the union of rows in this Dataset and another Dataset. This is equivalent to UNION ALL in SQL. To do a SQL-style set union (which deduplicates elements), use this function followed by [[distinct]]. Also, as is standard in SQL, this function resolves columns by position (not by name).
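A minimal sketch illustrating those two points, assuming the Df/Df2 names used in this answer (unionByName requires Spark 2.3 or later):

// UNION ALL semantics, then drop duplicate rows for a SQL-style set union
Df.union(Df2).distinct()

// union pairs columns by position; unionByName matches them by name instead,
// which is safer when the two DataFrames declare their columns in different orders
Df.unionByName(Df2)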