PySpark: DataFrame: numeric + empty column values result in NULL instead of a number

Asked: 2019-04-05 19:32:50

Tags: pyspark pyspark-sql

I am running into a problem with a PySpark DataFrame loaded from a CSV file, where some of my numeric columns contain empty values, as shown below:

+-------------+------------+-----------+-----------+
|  Player_Name|Test_Matches|ODI_Matches|T20_Matches|
+-------------+------------+-----------+-----------+
|   Aaron, V R|           9|          9|           |
|  Abid Ali, S|          29|          5|           |
|Adhikari, H R|          21|           |           |
| Agarkar, A B|          26|        191|          4|
+-------------+------------+-----------+-----------+

After casting these columns to integers, all the empty values become null:

df_data_csv_casted = df_data_csv.select(
    df_data_csv['Country'],
    df_data_csv['Player_Name'],
    df_data_csv['Test_Matches'].cast(IntegerType()).alias("Test_Matches"),
    df_data_csv['ODI_Matches'].cast(IntegerType()).alias("ODI_Matches"),
    df_data_csv['T20_Matches'].cast(IntegerType()).alias("T20_Matches"))


+-------------+------------+-----------+-----------+
|  Player_Name|Test_Matches|ODI_Matches|T20_Matches|
+-------------+------------+-----------+-----------+
|   Aaron, V R|           9|          9|       null|
|  Abid Ali, S|          29|          5|       null|
|Adhikari, H R|          21|       null|       null|
| Agarkar, A B|          26|        191|          4|
+-------------+------------+-----------+-----------+

Then I sum the columns, but if any one of them is null, the result is also null. How can I solve this?

df_data_csv_withTotalCol = df_data_csv_casted.withColumn(
    'Total_Matches',
    df_data_csv_casted['Test_Matches']
    + df_data_csv_casted['ODI_Matches']
    + df_data_csv_casted['T20_Matches'])

+-------------+------------+-----------+-----------+-------------+
|Player_Name  |Test_Matches|ODI_Matches|T20_Matches|Total_Matches|
+-------------+------------+-----------+-----------+-------------+
| Aaron, V R  |           9|          9|       null|         null|
|Abid Ali, S  |          29|          5|       null|         null|
|Adhikari, H R|          21|       null|       null|         null|
|Agarkar, A B |          26|        191|          4|          221|
+-------------+------------+-----------+-----------+-------------+

1 Answer:

Answer 0 (score: 0)

You can use the coalesce function to solve this. For example, let's create some sample data:

from pyspark.sql.functions import coalesce,lit

cDf = spark.createDataFrame([(None, None), (1, None), (None, 2)], ("a", "b"))
cDf.show()

+----+----+
|   a|   b|
+----+----+
|null|null|
|   1|null|
|null|   2|
+----+----+

When I do a plain sum, as you did:

cDf.withColumn('Total',cDf.a+cDf.b).show()

I get a null total, just as you described:

+----+----+-----+
|   a|   b|Total|
+----+----+-----+
|null|null| null|
|   1|null| null|
|null|   2| null|
+----+----+-----+

To fix this, combine the coalesce function with the lit function, which substitutes zero for the null values:

cDf.withColumn('Total',coalesce(cDf.a,lit(0)) +coalesce(cDf.b,lit(0))).show()

This gives me the correct result:

+----+----+-----+
|   a|   b|Total|
+----+----+-----+
+----+----+-----+
|null|null|    0|
|   1|null|    1|
|null|   2|    2|
+----+----+-----+
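
The semantics of `coalesce(col, lit(0))` are just "use the value if present, otherwise 0". For intuition, here is the same logic sketched in plain Python (no Spark required; the row data mirrors the sample DataFrame above):

```python
# Plain-Python analogue of coalesce(col, lit(0)) + coalesce(col, lit(0)).
# This is an illustration of the null-handling semantics, not Spark code.
rows = [
    {"a": None, "b": None},
    {"a": 1, "b": None},
    {"a": None, "b": 2},
]

def coalesce0(value):
    """Return the value itself, or 0 when it is None (like coalesce(col, lit(0)))."""
    return value if value is not None else 0

totals = [coalesce0(r["a"]) + coalesce0(r["b"]) for r in rows]
print(totals)  # [0, 1, 2]
```

As an alternative in PySpark itself, you can replace nulls in the numeric columns up front with `df.na.fill(0)` (or equivalently `df.fillna(0)`) and then sum the columns directly.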