PYSPARK: join a table column with one of two columns of another table

Date: 2018-11-20 15:32:26

Tags: apache-spark pyspark apache-spark-sql pyspark-sql

My problem is as follows:

Table 1
ID1 ID2
 1  2 
 3  4

Table 2
C1    VALUE
 1    London
 4    Texas

Table3 
 C3    VALUE
  2     Paris
  3     Arizona

Table 1 holds primary and secondary IDs. I need to produce a final output that aggregates the values from Table 2 and Table 3 based on the ID mappings from Table 1.

That is, if a value in Table 2 or Table 3 maps to either of the two IDs, it should be aggregated into a single row.

i.e. my final output should look like:

ID  Aggregated
1  [2, London, Paris] // since Paris is mapped to 2 which in turn is mapped to 1
3  [4, Texas, Arizona] // Texas is mapped to 4 which in turn is mapped to 3

Any suggestions on how to achieve this in PySpark?

I'm not sure whether joining the tables can solve this problem.

I thought a PairRDD might help, but I haven't been able to come up with a proper solution.

Thanks

1 Answer:

Answer 0 (score: 0):

Here is a very simple approach:

spark.sql(
"""
  select 1 as id1,2 as id2 
  union
  select 3 as id1,4 as id2 
""").createOrReplaceTempView("table1")

spark.sql(
"""
  select 1 as c1, 'london' as city 
  union
  select 4 as c1, 'texas' as city 
""").createOrReplaceTempView("table2")

spark.sql(
"""
  select 2 as c1, 'paris' as city 
  union
  select 3 as c1, 'arizona' as city 
""").createOrReplaceTempView("table3")

spark.table("table1").show()
spark.table("table2").show()
spark.table("table3").show()

# for simplicity, union table2 and table3

spark.sql(""" select * from table2 union all select * from table3 """).createOrReplaceTempView("city_mappings")
spark.table("city_mappings").show()

# now join to the ids:

spark.sql("""
  select id1, id2, city from table1
  join city_mappings on c1 = id1 or c1 = id2
""").createOrReplaceTempView("id_to_city")

# and finally you can aggregate: 

spark.sql("""
select id1, id2, collect_list(city)
from id_to_city
group by id1, id2
""").createOrReplaceTempView("result")

spark.table("result").show()

# the result looks like this; you can reshape it to better suit your needs:
+---+---+------------------+
|id1|id2|collect_list(city)|
+---+---+------------------+
|  1|  2|   [london, paris]|
|  3|  4|  [texas, arizona]|
+---+---+------------------+