Suppose I have the following Spark DataFrame:
+---+---+----+----+------+
| c1| c2|  c3|  c4|    c5|
+---+---+----+----+------+
|  A|abc| 0.1|null| 0.562|
|  B|def|0.15| 0.5| 0.123|
|  A|ghi| 0.2| 0.2|0.1345|
|  B|jkl|null| 0.1| 0.642|
|  B|mno| 0.1| 0.1|  null|
+---+---+----+----+------+
How can I check that all values in the last three columns are within the range [0, 1], ignoring values that are null?
Answer (score: 2)
The following should do the trick:
from functools import reduce
import pyspark.sql.functions as F
import warnings
# Keep only the rows where any of the last three columns is outside [0, 1];
# comparisons against null evaluate to null, so null values are ignored here
test_df = df.where(reduce(lambda x, y: x | y, ((F.col(c) > 1) | (F.col(c) < 0) for c in df.columns[2:])))
if len(test_df.head(1)) > 0:
    test_df.show()
    warnings.warn('Some of the values in the final dataframe were out of range')
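For completeness, here is a minimal, self-contained sketch that rebuilds the sample DataFrame from the table above and runs the same check. It uses Column.between(0, 1) as an equivalent way to express the range condition; the SparkSession setup and variable names are assumptions, not part of the original answer.

import warnings
from functools import reduce
import pyspark.sql.functions as F
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Sample data taken from the table in the question
df = spark.createDataFrame(
    [("A", "abc", 0.1, None, 0.562),
     ("B", "def", 0.15, 0.5, 0.123),
     ("A", "ghi", 0.2, 0.2, 0.1345),
     ("B", "jkl", None, 0.1, 0.642),
     ("B", "mno", 0.1, 0.1, None)],
    ["c1", "c2", "c3", "c4", "c5"],
)

# A row is flagged when any of c3..c5 is NOT between 0 and 1;
# nulls evaluate to null under between() too, so they are not flagged
out_of_range = reduce(
    lambda x, y: x | y,
    (~F.col(c).between(0, 1) for c in df.columns[2:]),
)
test_df = df.where(out_of_range)
if test_df.head(1):
    test_df.show()
    warnings.warn('Some of the values in the final dataframe were out of range')

With the sample data above, every non-null value already lies in [0, 1], so test_df is empty and no warning is raised.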