PySpark: overwrite column values with values from another column based on a condition

Date: 2018-05-17 23:41:29

Tags: apache-spark pyspark

I have a PySpark data frame as shown below.

df.show()

+-----------+------------+-------------+
|customer_id|product_name|      country|
+-----------+------------+-------------+
|   12870946|        null|       Poland|
|     815518|       MA401|United States|
|    3138420|     WG111v2|           UK|
|    3178864|    WGR614v6|United States|
|    7456796|       XE102|United States|
|   21893468|     AGM731F|United States|
+-----------+------------+-------------+

I have another data frame, shown below.

df1.show()

+-----------+------------+
|customer_id|product_name|
+-----------+------------+
|   12870946|     GS748TS|
|     815518|       MA402|
|    3138420|        null|
|    3178864|    WGR614v6|
|    7456796|       XE102|
|   21893468|     AGM731F|
|       null|       AE171|
+-----------+------------+

Now I want to do a full outer join on these tables and update the product_name column values as follows.

1) Overwrite the values in `df` with the values in `df1` if there are values in `df1`.
2) If there are `null` values or no values in `df1`, then leave the values in `df` as they are.

Expected result:

+-----------+------------+-------------+
|customer_id|product_name|      country|
+-----------+------------+-------------+
|   12870946|     GS748TS|       Poland|
|     815518|       MA402|United States|
|    3138420|     WG111v2|           UK|
|    3178864|    WGR614v6|United States|
|    7456796|       XE102|United States|
|   21893468|     AGM731F|United States|
|       null|       AE171|         null|
+-----------+------------+-------------+

Here is what I tried:

import pyspark.sql.functions as f
df2 = df.join(df1, df.customer_id == df1.customer_id, 'full_outer') \
    .select(
        df.customer_id,
        f.coalesce(df.product_name, df1.product_name).alias('product_name'),
        df.country
    )

But the result I get is different:

df2.show()

+-----------+------------+-------------+
|customer_id|product_name|      country|
+-----------+------------+-------------+
|   12870946|        null|       Poland|
|     815518|       MA401|United States|
|    3138420|     WG111v2|           UK|
|    3178864|    WGR614v6|United States|
|    7456796|       XE102|United States|
|   21893468|     AGM731F|United States|
|       null|       AE171|         null|
+-----------+------------+-------------+

How can I get the expected result?

3 Answers:

Answer 0 (score: 3)

The code you wrote produces the correct output for me, so I am unable to reproduce your problem. I have seen other posts where using aliases during the join resolved similar issues, so here is a slightly modified version of your code that does the same thing:

import pyspark.sql.functions as f

df.alias("r").join(df1.alias("l"), on="customer_id", how='full_outer')\
    .select(
        "customer_id",
        f.coalesce("r.product_name", "l.product_name").alias('product_name'),
        "country"
    )\
    .show()
#+-----------+------------+-------------+
#|customer_id|product_name|      country|
#+-----------+------------+-------------+
#|    7456796|       XE102|United States|
#|    3178864|    WGR614v6|United States|
#|       null|       AE171|         null|
#|     815518|       MA401|United States|
#|    3138420|     WG111v2|           UK|
#|   12870946|     GS748TS|       Poland|
#|   21893468|     AGM731F|United States|
#+-----------+------------+-------------+

I also get the same result when I run your code (reproduced below):

df.join(df1, df.customer_id == df1.customer_id, 'full_outer')\
    .select(
        df.customer_id,
        f.coalesce(df.product_name, df1.product_name).alias('product_name'),
        df.country
    )\
    .show()

I am using Spark 2.1 and Python 2.7.13.
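
For reference, here is a rough sketch of how the two input data frames could be rebuilt for a local test. The values are taken from the tables in the question; the use of real None values (rather than the string "null") and the SparkSession setup are assumptions on my side.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# df: customer_id, product_name (one real null), country
df = spark.createDataFrame(
    [(12870946, None, "Poland"),
     (815518, "MA401", "United States"),
     (3138420, "WG111v2", "UK"),
     (3178864, "WGR614v6", "United States"),
     (7456796, "XE102", "United States"),
     (21893468, "AGM731F", "United States")],
    ["customer_id", "product_name", "country"])

# df1: customer_id (one real null), product_name (one real null)
df1 = spark.createDataFrame(
    [(12870946, "GS748TS"),
     (815518, "MA402"),
     (3138420, None),
     (3178864, "WGR614v6"),
     (7456796, "XE102"),
     (21893468, "AGM731F"),
     (None, "AE171")],
    ["customer_id", "product_name"])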

Answer 1 (score: 2)

Your code works perfectly if the values are not the string "null". But looking at the df2 data frame you are getting, the values in product_name appear to be the string "null". You have to check for the string "null" using the when built-in function together with the isnull built-in function:

import pyspark.sql.functions as f
df2 = df.join(df1, df.customer_id == df1.customer_id, 'full_outer') \
    .select(
        df.customer_id,
        f.when(f.isnull(df.product_name) | (df.product_name == "null"), df1.product_name)
         .otherwise(df.product_name)
         .alias('product_name'),
        df.country
    )
df2.show(truncate=False)

which should give you:

+-----------+------------+------------+
|customer_id|product_name|country     |
+-----------+------------+------------+
|7456796    |XE102       |UnitedStates|
|3178864    |WGR614v6    |UnitedStates|
|815518     |MA401       |UnitedStates|
|3138420    |WG111v2     |UK          |
|12870946   |GS748TS     |Poland      |
|21893468   |AGM731F     |UnitedStates|
|null       |AE171       |null        |
+-----------+------------+------------+
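
If it really is the literal string "null" that is causing the mismatch, another option (just a sketch under that assumption, not something confirmed in the question) is to normalize the string to a true null first and then reuse the coalesce approach from the question:

import pyspark.sql.functions as f

# Turn the literal string "null" in product_name into a real null first
df_clean = df.withColumn(
    "product_name",
    f.when(f.col("product_name") == "null", f.lit(None)).otherwise(f.col("product_name")))

# Now the original full outer join + coalesce behaves as intended
df2 = df_clean.join(df1, df_clean.customer_id == df1.customer_id, "full_outer") \
    .select(
        df_clean.customer_id,
        f.coalesce(df_clean.product_name, df1.product_name).alias("product_name"),
        df_clean.country)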

Answer 2 (score: 0)

Since there are some conflicting reports: first, just create a new column in df1 containing the column from df2 that you want to use, assuming your data frames have the same dimensions, or join them as needed if they do not. Then you can use a SQL-style conditional.

from pyspark.sql import functions as F
df1 = df1.withColumn(
    'column',
    F.when(df1['column'].isNull(), df1['column'])
     .otherwise(df1['other-column-originally-from-df2'])
)
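
For completeness, the same kind of conditional can also be written directly in Spark SQL. The sketch below registers both data frames as temporary views (the view names and the spark session variable are assumptions) and applies the two rules exactly as stated in the question, i.e. df1 wins whenever it has a non-null value:

df.createOrReplaceTempView("df")
df1.createOrReplaceTempView("df1")

df2 = spark.sql("""
    SELECT COALESCE(df.customer_id, df1.customer_id) AS customer_id,
           CASE WHEN df1.product_name IS NOT NULL THEN df1.product_name
                ELSE df.product_name
           END AS product_name,
           df.country
    FROM df
    FULL OUTER JOIN df1
      ON df.customer_id = df1.customer_id
""")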