I have the following data:

group_id  id  name
--------  --  ------
G1        1   apple
G1        2   orange
G1        3   apple
G1        4   banana
G1        5   apple
G2        6   orange
G2        7   apple
G2        8   apple

I want to find, within each group, how many times each name appears. So far I have done:

val group = Window.partitionBy("group_id")
newdf.withColumn("name_appeared_count", approx_count_distinct($"name").over(group))

but I want a result where every row carries the count of how many times its name appears within its group. Thanks!
Answer 0 (score: 2)
The expression approx_count_distinct($"name").over(group) counts the number of distinct names per group, hence isn't what you want based on your expected output. Using count("name") over a window partitioned by ("group_id", "name") will produce the wanted counts:
import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window
import spark.implicits._  // required for toDF when not running in spark-shell

val df = Seq(
  ("G1", 1, "apple"),
  ("G1", 2, "orange"),
  ("G1", 3, "apple"),
  ("G1", 4, "banana"),
  ("G1", 5, "apple"),
  ("G2", 6, "orange"),
  ("G2", 7, "apple"),
  ("G2", 8, "apple")
).toDF("group_id", "id", "name")

// Partition by both group_id and name, so count("name") counts
// how often each name occurs within its group
val group = Window.partitionBy("group_id", "name")

df.
  withColumn("name_appeared_count", count("name").over(group)).
  orderBy("id").
  show
// +--------+---+------+-------------------+
// |group_id| id| name|name_appeared_count|
// +--------+---+------+-------------------+
// | G1| 1| apple| 3|
// | G1| 2|orange| 1|
// | G1| 3| apple| 3|
// | G1| 4|banana| 1|
// | G1| 5| apple| 3|
// | G2| 6|orange| 1|
// | G2| 7| apple| 2|
// | G2| 8| apple| 2|
// +--------+---+------+-------------------+
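
For contrast, here is a quick sketch of what the original attempt returns, using the same df and imports as above. Partitioned only by group_id, approx_count_distinct reports the number of distinct names in each group, so every G1 row gets 3 and every G2 row gets 2, regardless of how often its own name appears. The names byGroup and distinct_names_in_group are just illustrative.

// Sketch: the original window, partitioned only by group_id.
// approx_count_distinct counts distinct names per group, so all G1 rows
// show 3 (apple, orange, banana) and all G2 rows show 2 (orange, apple).
val byGroup = Window.partitionBy("group_id")
df.
  withColumn("distinct_names_in_group", approx_count_distinct($"name").over(byGroup)).
  orderBy("id").
  show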
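
If a window function isn't required, the same counts can also be obtained with a groupBy plus a join back onto the original rows. This is only an alternative sketch, not part of the answer above; the counts val name is arbitrary.

// Alternative: aggregate per (group_id, name), then join the counts back
// so every original row keeps its id and gains name_appeared_count.
val counts = df.
  groupBy("group_id", "name").
  agg(count("name").as("name_appeared_count"))

df.
  join(counts, Seq("group_id", "name")).
  orderBy("id").
  show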