Python Spark: join two dataframes and fill a column

Asked: 2020-06-30 14:38:15

Tags: apache-spark join pyspark

I have two dataframes that I need to join in a somewhat special way.

Dataframe 1:

+--------------------+---------+----------------+
|        asset_domain|      eid|             oid|
+--------------------+---------+----------------+
|      test-domain...|   126656|          126656|
|    nebraska.aaa.com|   335660|          335660|
|         netflix.com|      460|             460|
+--------------------+---------+----------------+

Dataframe 2:

+--------------------+--------------------+---------+--------------+----+----+------------+
|               asset|        asset_domain|dns_count|            ip|  ev|post|form_present|
+--------------------+--------------------+---------+--------------+----+----+------------+
| sub1.test-domain...|      test-domain...|     6354| 11.11.111.111|   1|   1|        null|
|         netflix.com|         netflix.com|     3836| 22.22.222.222|null|null|        null|
+--------------------+--------------------+---------+--------------+----+----+------------+

Desired result:

+--------------------+---------+-------------+----+----+------------+---------+----------------+
|               asset|dns_count|           ip|  ev|post|form_present|      eid|             oid|
+--------------------+---------+-------------+----+----+------------+---------+----------------+
|         netflix.com|     3836|22.22.222.222|null|null|        null|      460|             460|
| sub1.test-domain...|     6354|11.11.111.111|   1|   1|        null|   126656|          126656|
|    nebraska.aaa.com|     null|         null|null|null|        null|   335660|          335660|
+--------------------+---------+-------------+----+----+------------+---------+----------------+

Basically, it should join df1 and df2 on asset_domain, but if an asset_domain from df1 does not exist in df2, then the resulting asset should be the asset_domain from df1.

I tried df = df2.join(df1, ["asset_domain"], "right").drop("asset_domain"), but that obviously leaves null in the asset column for nebraska.aaa.com, since it has no matching domain in df2. How do I get those values added to the asset column for this special case?
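For reference, a minimal sketch that reproduces the two dataframes above (not in the original post; it assumes an active SparkSession named spark, integer eid/oid, and form_present typed as string purely for illustration, since it is null in every row):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# df1: one row per asset_domain with its eid/oid.
df1 = spark.createDataFrame(
    [("test-domain...", 126656, 126656),
     ("nebraska.aaa.com", 335660, 335660),
     ("netflix.com", 460, 460)],
    ["asset_domain", "eid", "oid"],
)

# df2: per-asset details; an explicit schema is needed because
# form_present is null in every row, so its type cannot be inferred.
df2 = spark.createDataFrame(
    [("sub1.test-domain...", "test-domain...", 6354, "11.11.111.111", 1, 1, None),
     ("netflix.com", "netflix.com", 3836, "22.22.222.222", None, None, None)],
    "asset string, asset_domain string, dns_count int, ip string, "
    "ev int, post int, form_present string",
)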

2 Answers:

Answer 0 (score: 4):

You can use the coalesce function after the join to create the asset column: coalesce returns the first non-null value among its arguments, so rows with no match in df2 fall back to asset_domain.

from pyspark.sql.functions import coalesce

df2.join(df1, ["asset_domain"], "right") \
   .select(coalesce("asset", "asset_domain").alias("asset"),
           "dns_count", "ip", "ev", "post", "form_present", "eid", "oid") \
   .orderBy("asset").show()
#+----------------+---------+-------------+----+----+------------+------+------+
#|           asset|dns_count|           ip|  ev|post|form_present|   eid|   oid|
#+----------------+---------+-------------+----+----+------------+------+------+
#|nebraska.aaa.com|     null|         null|null|null|        null|335660|335660|
#|     netflix.com|     3836|22.22.222.222|null|null|        null|   460|   460|
#|sub1.test-domain|     6354|11.11.111.111|   1|   1|        null|126656|126656|
#+----------------+---------+-------------+----+----+------------+------+------+
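To see the fallback in isolation, here is a minimal sketch (not from the original answer) with made-up values, again assuming a SparkSession named spark:

from pyspark.sql.functions import coalesce

# coalesce("asset", "asset_domain") yields asset where it is non-null,
# otherwise falls back to asset_domain.
demo = spark.createDataFrame(
    [("sub1.test-domain", "test-domain"), (None, "nebraska.aaa.com")],
    "asset string, asset_domain string",
)
demo.select(coalesce("asset", "asset_domain").alias("asset")).show()
#+----------------+
#|           asset|
#+----------------+
#|sub1.test-domain|
#|nebraska.aaa.com|
#+----------------+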

Answer 1 (score: 0):

After the join, you can use the isNull() function together with when/otherwise:

import pyspark.sql.functions as F

# Toy data: tst1 plays the role of df1, tst2 of df2 (with a null asset for flipkart).
tst1 = sqlContext.createDataFrame([('netflix', 1), ('amazon', 2)], schema=("asset_domain", 'xtra1'))
tst2 = sqlContext.createDataFrame([('netflix', 'yahoo', 1), ('amazon', 'yahoo', 2), ('flipkart', None, 2)], schema=("asset_domain", "asset", 'xtra'))

# A right join keeps every row of tst2; rows with no match in tst1 get null for xtra1.
tst_j = tst1.join(tst2, on='asset_domain', how='right')

# Wherever asset is null, fall back to asset_domain.
tst_res = tst_j.withColumn("asset", F.when(F.col('asset').isNull(), F.col('asset_domain')).otherwise(F.col('asset')))
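Showing tst_res should confirm that flipkart's null asset was filled with its asset_domain (sketch of the output; row order may differ):

tst_res.show()
#+------------+-----+--------+----+
#|asset_domain|xtra1|   asset|xtra|
#+------------+-----+--------+----+
#|     netflix|    1|   yahoo|   1|
#|      amazon|    2|   yahoo|   2|
#|    flipkart| null|flipkart|   2|
#+------------+-----+--------+----+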