I have the following dataframe:
field_A | field_B | field_C | field_D
cat | 12 | black | 11
dog | 128 | white | 19
dog | 35 | yellow | 20
dog | 21 | brown | 4
bird | 10 | blue | 7
cow | 99 | brown | 34
Is it possible to filter out the rows whose field_A value appears more than once? That is, I'd like the final dataframe to be:
field_A | field_B | field_C | field_D
cat | 12 | black | 11
bird | 10 | blue | 7
cow | 99 | brown | 34
Is this possible in pyspark? Thanks!
Answer 0 (score: 4)
Create the data:
rdd = sc.parallelize([(0,1), (0,10), (0,20), (1,2), (2,1), (3,5), (3,18), (4,15), (5,18)])
t = sqlContext.createDataFrame(rdd, ["id", "score"])
t.collect()
[Row(id=0, score=1), Row(id=0, score=10), Row(id=0, score=20), Row(id=1, score=2), Row(id=2, score=1), Row(id=3, score=5), Row(id=3, score=18), Row(id=4, score=15), Row(id=5, score=18)]
Get the count of rows for each id:
idCounts = t.groupBy('id').count()
Join idCounts back onto the original dataframe and keep only the ids that appear exactly once:
out = t.join(idCounts, 'id', 'left_outer').filter('count = 1').select(['id', 'score'])
out.collect()
[Row(id=1, score=2), Row(id=2, score=1), Row(id=4, score=15), Row(id=5, score=18)]
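The same keep-only-singletons logic, applied to the question's field_A table, can be sketched in plain Python (a Counter standing in for groupBy('field_A').count(); no Spark cluster required, so this is an illustration of the logic rather than the PySpark answer itself):

```python
from collections import Counter

# Rows from the question's dataframe: (field_A, field_B, field_C, field_D)
rows = [
    ("cat", 12, "black", 11),
    ("dog", 128, "white", 19),
    ("dog", 35, "yellow", 20),
    ("dog", 21, "brown", 4),
    ("bird", 10, "blue", 7),
    ("cow", 99, "brown", 34),
]

# Count occurrences of each field_A value, then keep only rows whose
# value occurs exactly once -- the analogue of the answer's
# groupBy('id').count() followed by filter('count = 1').
counts = Counter(r[0] for r in rows)
out = [r for r in rows if counts[r[0]] == 1]
print(out)
# [('cat', 12, 'black', 11), ('bird', 10, 'blue', 7), ('cow', 99, 'brown', 34)]
```

In PySpark the equivalent would group on 'field_A' instead of 'id', then join the counts back and filter, exactly as in the id/score example above.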