PySpark: check whether values in certain columns are within a range

Date: 2019-07-12 08:27:14

Tags: python apache-spark dataframe pyspark

Suppose I have the following Spark DataFrame:

+---+---+----+----+------+
| c1| c2|  c3|  c4|    c5|
+---+---+----+----+------+
|  A|abc| 0.1|null| 0.562|
|  B|def|0.15| 0.5| 0.123|
|  A|ghi| 0.2| 0.2|0.1345|
|  B|jkl|null| 0.1| 0.642|
|  B|mno| 0.1| 0.1|  null|
+---+---+----+----+------+

How can I check whether all values in the last three columns, wherever they are not null, fall within the range [0, 1]?
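
For reference, here is a minimal sketch that reproduces the DataFrame above (it assumes an active SparkSession bound to the name spark; the data is copied from the table):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# None becomes null in the resulting DataFrame
df = spark.createDataFrame(
    [("A", "abc", 0.1, None, 0.562),
     ("B", "def", 0.15, 0.5, 0.123),
     ("A", "ghi", 0.2, 0.2, 0.1345),
     ("B", "jkl", None, 0.1, 0.642),
     ("B", "mno", 0.1, 0.1, None)],
    ["c1", "c2", "c3", "c4", "c5"],
)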

1 Answer:

Answer 0 (score: 2):

The following should do the trick:

from functools import reduce
import warnings

import pyspark.sql.functions as F

# Keep only the rows where at least one of the last three columns (c3-c5)
# falls outside [0, 1]; comparisons against null yield null, which the
# filter treats as false, so null values are ignored as intended.
test_df = df.where(
    reduce(lambda a, b: a | b,
           ((F.col(c) < 0) | (F.col(c) > 1) for c in df.columns[2:]))
)

if len(test_df.head(1)) > 0:
    test_df.show()
    warnings.warn('Some of the values in the final dataframe were out of range')
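
As a possible alternative (a sketch, not part of the original answer), the same per-column condition can be written by negating Column.between, which performs an inclusive range check; nulls propagate through the negation, so they are still skipped by the filter:

from functools import reduce
import pyspark.sql.functions as F

# Flag rows where any of the last three columns holds a non-null value
# outside [0, 1]; between(0, 1) is inclusive on both bounds.
test_df = df.where(
    reduce(lambda a, b: a | b,
           (~F.col(c).between(0, 1) for c in df.columns[2:]))
)

Note that checking head(1), as the answer does, is cheaper than count() when you only need to know whether any offending row exists.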