Spark join

Date: 2018-02-14 12:58:25

Tags: apache-spark dataframe apache-spark-sql

I have more than two tables that I want to join into a single table so that queries run faster.

Table-1

---------------
user  | activityId
---------------
user1 | 123
user2 | 123
user3 | 123
user4 | 123
user5 | 123
---------------

Table-2

---------------------------------
user  | activityId | event-1-time
---------------------------------
user2 | 123        | 1001
user2 | 123        | 1002
user3 | 123        | 1003
user5 | 123        | 1004
---------------------------------

Table-3

---------------------------------
user  | activityId | event-2-time
---------------------------------
user2 | 123        | 10001
user5 | 123        | 10002
---------------------------------

A left join of table-2 and table-3 onto table-1 on (user, activityId) produces the following result:

Joined data

--------------------------------------------------------------------
user  | activityId | event-1 | event-1-time | event-2 | event-2-time
--------------------------------------------------------------------
user1 | 123        | 0       | null         | 0       | null
user2 | 123        | 1       | 1001         | 1       | 10001
user2 | 123        | 1       | 1002         | 1       | 10001
user3 | 123        | 1       | 1003         | 0       | null
user4 | 123        | 0       | null         | 0       | null
user5 | 123        | 1       | 1004         | 1       | 10002
--------------------------------------------------------------------

At the same time, I want to remove the redundancy introduced for event-2: event-2 occurred only once, but it is reported twice because event-1 occurred twice.

In other words, the records grouped by user and activityId should stay distinct at each table level.

I want the following output. I do not care about the pairing between event-1 and event-2. Is there any way to do a custom join that achieves this behavior?

--------------------------------------------------------------------
user  | activityId | event-1 | event-1-time | event-2 | event-2-time
--------------------------------------------------------------------
user1 | 123        | 0       | null         | 0       | null
user2 | 123        | 1       | 1001         | 1       | 10001
user2 | 123        | 1       | 1002         | 0       | null
user3 | 123        | 1       | 1003         | 0       | null
user4 | 123        | 0       | null         | 0       | null
user5 | 123        | 1       | 1004         | 1       | 10002
--------------------------------------------------------------------

Edit

I am joining these tables using Scala. The query used:

val joined = table1.join(table2, Seq("user", "activityId"), "left").join(table3, Seq("user", "activityId"), "left")

joined.select(
  table1("user"), table1("activityId"),
  when(table2("activityId").isNull, 0).otherwise(1) as "event-1",
  table2("timestamp") as "event-1-time",
  when(table3("activityId").isNull, 0).otherwise(1) as "event-2",
  table3("timestamp") as "event-2-time"
).show

2 answers:

Answer 0 (score: 1)

You should create an additional column holding a row index for each user and activityId, and then use that added column in the outer join:

import org.apache.spark.sql.expressions._
def windowSpec = Window.partitionBy("user").orderBy("activityId")

import org.apache.spark.sql.functions._
val tempTable1 = table1.withColumn("rowNumber", row_number().over(windowSpec))
val tempTable2 = table2.withColumn("rowNumber", row_number().over(windowSpec)).withColumn("event-1", lit(1))
val tempTable3 = table3.withColumn("rowNumber", row_number().over(windowSpec)).withColumn("event-2", lit(1))

tempTable1
  .join(tempTable2, Seq("user", "activityId", "rowNumber"), "outer")
  .join(tempTable3, Seq("user", "activityId", "rowNumber"), "outer")
  .drop("rowNumber")
  .na.fill(0)

You should get the desired output as a DataFrame.
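For reference, here is a minimal sketch (not part of the original answer; it assumes a SparkSession named spark and uses the column names event-1-time and event-2-time from the question's tables) that builds the three example tables so the snippet above can be tried end-to-end:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("join-dedup-test").getOrCreate()
import spark.implicits._

// Example rows copied from table-1, table-2 and table-3 in the question.
val table1 = Seq(
  ("user1", 123), ("user2", 123), ("user3", 123), ("user4", 123), ("user5", 123)
).toDF("user", "activityId")

val table2 = Seq(
  ("user2", 123, 1001L), ("user2", 123, 1002L), ("user3", 123, 1003L), ("user5", 123, 1004L)
).toDF("user", "activityId", "event-1-time")

val table3 = Seq(
  ("user2", 123, 10001L), ("user5", 123, 10002L)
).toDF("user", "activityId", "event-2-time")

Running the answer's code on these DataFrames and calling .show() makes it easy to compare the result with the desired table (note that na.fill(0) also turns the null event times into 0). The window orders by activityId, which is constant within a user, so which of the duplicate event-1 rows gets rowNumber 1 is arbitrary; since the asker does not care about the event-1/event-2 pairing, that is acceptable.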

Answer 1 (score: 0)

Here is a PySpark implementation of what was asked:

from pyspark.sql import Row

# Build the table-1 equivalent as a DataFrame
ll = [('test',123),('test',123),('test',123),('test',123)]
rdd = sc.parallelize(ll)
test1 = rdd.map(lambda x: Row(user=x[0], activityid=int(x[1])))
test1_df = sqlContext.createDataFrame(test1)

# Build the table-2 equivalent (event-1 times)
mm = [('test',123,1001),('test',123,1002),('test',123,1003),('test',123,1004)]
rdd1 = sc.parallelize(mm)
test2 = rdd1.map(lambda x: Row(user=x[0], activityid=int(x[1]), event_time_1=int(x[2])))
test2_df = sqlContext.createDataFrame(test2)

# Build the table-3 equivalent (event-2 times)
nn = [('test',123,10001),('test',123,10002)]
rdd2 = sc.parallelize(nn)
test3 = rdd2.map(lambda x: Row(user=x[0], activityid=int(x[1]), event_time_2=int(x[2])))
test3_df = sqlContext.createDataFrame(test3)

from pyspark.sql.window import Window
import pyspark.sql.functions as func
from pyspark.sql.functions import dense_rank, rank

# Keep only the first event per (user, activityid) in each event table
n = Window.partitionBy(test2_df.user, test2_df.activityid).orderBy(test2_df.event_time_1)
int2_df = test2_df.select("user", "activityid", "event_time_1", rank().over(n).alias("col_rank")).filter('col_rank = 1')

o = Window.partitionBy(test3_df.user, test3_df.activityid).orderBy(test3_df.event_time_2)
int3_df = test3_df.select("user", "activityid", "event_time_2", rank().over(o).alias("col_rank")).filter('col_rank = 1')

# Left-outer join the deduplicated event tables onto the distinct user/activity rows
test1_df.distinct().join(int2_df, ["user","activityid"], "leftouter").join(int3_df, ["user","activityid"], "leftouter").show(10)

+----+----------+------------+--------+------------+--------+
|user|activityid|event_time_1|col_rank|event_time_2|col_rank|
+----+----------+------------+--------+------------+--------+
|test|       123|        1001|       1|       10001|       1|
+----+----------+------------+--------+------------+--------+