Filtering a Spark DataFrame using a list built from another DataFrame's column

Date: 2020-05-18 18:52:10

Tags: apache-spark pyspark apache-spark-sql

df = spark.createDataFrame([("1gh","25g","36h"),("2gf","3ku","4we"),("12w","53v","c74"),("1a2","3d4","4c5"),("232","3df","4rt")], ["a","b","c"])


filter_df = spark.createDataFrame([("2gf","3ku"),("12w","53v"), ("12w","53v")], ["a","b"])

I select column "a" from "filter_df", create an RDD from it, and turn it into a list with the following code:

unique_list = filter_df.select("a").rdd.flatMap(lambda x: x).distinct().collect()

This gives me:

unique_list = [u'2gf', u'12w']
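
For reference, the same list can be built without dropping down to the RDD API; a minimal equivalent sketch:

# DataFrame-only version of the same step: deduplicate, collect Rows, unpack
unique_list = [row["a"] for row in filter_df.select("a").distinct().collect()]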

The RDD is converted to a list with a collect operation, but running the filter below then fails with the allocation errors shown:

from pyspark.sql import functions as F
final_df = df.filter(F.col("a").isin(unique_list))

118.255: [GC (Allocation Failure) [PSYoungGen: 1380832K->538097K(1772544K)] 2085158K->1573272K(3994112K), 0.0622847 secs] [Times: user=2.31 sys=1.76, real=0.06 secs]
122.540: [GC (Allocation Failure) [PSYoungGen: 1772529K->542497K(2028544K)] 2807704K->1581484K(4250112K), 0.3217980 secs] [Times: user=11.16 sys=13.15, real=0.33 secs]
127.071: [GC (Allocation Failure) [PSYoungGen: 1776929K->542721K(2411008K)] 2815916K->1582011K(4632576K), 0.8024852 secs] [Times: user=58.43 sys=4.85, real=0.80 secs]
133.284: [GC (Allocation Failure) [PSYoungGen: 2106881K->400752K(2446848K)] 3146171K->1583953K(4668416K), 0.4198589 secs] [Times: user=18.31 sys=12.58, real=0.42 secs]
139.050: [GC (Allocation Failure) [PSYoungGen: 1964912K->10304K(2993152K)] 3148113K->1584408K(5214720K), 0.0712454 secs] [Times: user=2.92 sys=0.88, real=0.08 secs]
146.638: [GC (Allocation Failure) [PSYoungGen: 2188864K->12768K(3036160K)] 3762968K->1588544K(5257728K), 0.1212116 secs] [Times: user=3.05 sys=3.74, real=0.12 secs]
154.153: [GC (Allocation Failure) [PSYoungGen: 2191328K->12128K(3691008K)] 3767104K->1590112K(5912576K), 0.1179030 secs] [Times: user=6.94 sys=0.11, real=0.12 secs]
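
Note that with a large list the filter itself is also a problem: collect pulls every value onto the driver, and isin compiles the whole list into one big IN expression. As the answers below suggest, joining against filter_df keeps the work distributed; a minimal sketch of the idea, filtering on column "a" only as in the isin version:

# No collect(): the filter stays distributed as a semi-join on the key
final_df = df.join(filter_df.select("a").distinct(), "a", "left_semi")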

Required output:

final_df

+---+---+---+
|  a|  b|  c|
+---+---+---+
|2gf|3ku|4we|
|12w|53v|c74|
+---+---+---+

What is an efficient way to filter a Spark DataFrame using another RDD, list, or DataFrame? The data above is only a sample; in production the datasets will be much larger.

2 answers:

Answer 0 (score: 0)

You can use an inner join:

# parentheses are required: & binds more tightly than ==
df.join(filter_df, (df.a == filter_df.a) & (df.b == filter_df.b))
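
One caveat: filter_df above contains the row ("2gf","3ku") once but ("12w","53v") twice, so an inner join emits the duplicated match twice. A sketch that deduplicates first and uses the column-name join form to avoid ambiguous column references (my addition, not part of the original answer):

# Deduplicate the filter keys, then join on the shared column names;
# this form also keeps a single copy of the a/b columns in the result.
final_df = df.join(filter_df.dropDuplicates(), ["a", "b"])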

Answer 1 (score: 0)

Use a left_semi join:

df.join(filter_df, ['a','b'],'left_semi')
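
A fuller sketch with the sample data above; the broadcast hint is an optional addition of mine, useful when filter_df is small, and left_semi never duplicates rows even though filter_df contains a duplicate:

from pyspark.sql.functions import broadcast

# Keep only rows of df whose (a, b) pair appears in filter_df;
# no columns from filter_df are added to the result.
final_df = df.join(broadcast(filter_df), ["a", "b"], "left_semi")
final_df.show()
# +---+---+---+
# |  a|  b|  c|
# +---+---+---+
# |2gf|3ku|4we|
# |12w|53v|c74|
# +---+---+---+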