Spark's Dataset API (in Scala) has a dropDuplicates function that removes duplicate rows based on the columns you supply. A simple example:
import org.apache.spark.sql.functions._
// toDF on a local Seq needs the SparkSession implicits (already in scope in spark-shell)
import spark.implicits._
val df = Seq (
( 1, 1, 1234, "12010", "null" ),
( 1, 2, 1234, "22201", "null" ),
( 2, 1, 2345, "12011", "null" ),
( 2, 2, 2345, "12011", "null" ),
( 2, 3, 2345, "32011", "yellow" ),
( 2, 4, 2345, "32011", "yellow" ),
( 3, 1, 3456, "4012 ", "null" ),
( 3, 2, 3456, "52012", "green" ),
( 4, 1, 4567, "52012", "green" ),
( 4, 2, 4567, "52013", "null" )
)
.toDF( "identifier1", "identifier2", "groupid", "date", "colour" )
//df.show
// Drop the duplicates based on date and identifier1 columns
df
.dropDuplicates(Seq("date", "identifier1"))
.show
My result: each duplicate (date, identifier1) pair is collapsed to a single row, so 8 of the original 10 rows remain.
I'd add that it isn't 100% clear from your example exactly what is required, but hopefully this proves a useful starting point. Read more about dropDuplicates
here.
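
One thing to keep in mind: when several rows share the same (date, identifier1) pair, dropDuplicates keeps an arbitrary one of them. If you need deterministic control over which row survives, a window with row_number is a common alternative. This is only a minimal sketch (assuming, for illustration, that you want the row with the lowest identifier2 in each group), not part of the original answer:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number

// Keep exactly one row per (date, identifier1), choosing the smallest identifier2
val w = Window.partitionBy("date", "identifier1").orderBy("identifier2")
df.withColumn("rn", row_number().over(w))
  .filter($"rn" === 1)
  .drop("rn")
  .show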