PySpark join using monthly ranges

Date: 2019-05-22 17:37:51

Tags: pyspark

There are three tables in Hive: A, B, and C. Table A has the following columns and is partitioned by day. We need to extract the data between 2016-01-01 and 2016-12-31. Only a sample is shown here, but over the year these records run into the millions.

Table A

ID Day Name Description
1   2016-09-01  Sam   Retail
2   2016-01-28  Chris Retail
3   2016-02-06  ChrisTY Retail
4   2016-02-26  Christa Retail
3   2016-12-06  ChrisTu Retail
4   2016-12-31  Christi Retail

Table B

ID SkEY
1  1.1
2  1.2
3  1.3

Table C

Start_Date  End_Date Month_No
2016-01-01 2016-01-31 1
2016-02-01 2016-02-28 2
2016-03-01 2016-03-31 3
2016-04-01 2016-04-30 4
2016-05-01 2016-05-31 5
2016-06-01 2016-06-30 6
2016-07-01 2016-07-31 7
2016-08-01 2016-08-31 8
2016-09-01 2016-09-30 9
2016-10-01 2016-10-31 10
2016-11-01 2016-11-30 11
2016-12-01 2016-12-31 12
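
For reference, a minimal, purely illustrative sketch that builds a few of the sample rows above as DataFrames (column names taken from the table headers, dates kept as 'yyyy-MM-dd' strings; the real tables live in Hive) can make the join experiments below easy to try locally:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# A handful of the sample rows shown above, for local experimentation only
df_a = spark.createDataFrame(
    [(1, "2016-09-01", "Sam", "Retail"),
     (2, "2016-01-28", "Chris", "Retail"),
     (4, "2016-12-31", "Christi", "Retail")],
    ["ID", "Day", "Name", "Description"])

df_b = spark.createDataFrame(
    [(1, 1.1), (2, 1.2), (3, 1.3)],
    ["ID", "SkEY"])

df_c = spark.createDataFrame(
    [("2016-01-01", "2016-01-31", 1),
     ("2016-09-01", "2016-09-30", 9),
     ("2016-12-01", "2016-12-31", 12)],
    ["Start_Date", "End_Date", "Month_No"])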

I tried to write this in Spark, but it did not work: the join produced a Cartesian product and performance was very poor.

Df_A = spark.sql("select * from A a join C c on a.Day >= c.Start_Date "
                 "and a.Day <= c.End_Date and c.Month_No = {0}".format(I))
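
Range predicates like these, with no equi-join key, are what typically push Spark into a Cartesian (broadcast nested loop) plan. A minimal sketch of the same join through the DataFrame API, assuming the tables are read with spark.table and exploiting the fact that C has only one row per month, could look like this:

from pyspark.sql import functions as F

df_a = spark.table("A")
df_c = spark.table("C")

# C is tiny (12 rows), so broadcasting it keeps the non-equi range join
# from turning into an expensive shuffled Cartesian product
a_with_month = df_a.join(
    F.broadcast(df_c),
    (df_a["Day"] >= df_c["Start_Date"]) & (df_a["Day"] <= df_c["End_Date"]))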

The actual output should be PySpark code in which A is joined with B, and every month needs to be processed; the value of I should increase automatically from 1 to 12 along with the month dates. A should join B on ID and join C as shown above, and performance should be good.
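
Since every day of 2016 falls into exactly one calendar month, one possible alternative to incrementing I from 1 to 12 is to derive Month_No directly from the partition column and skip the range join against C altogether. A sketch, assuming Day is stored as a 'yyyy-MM-dd' string:

from pyspark.sql import functions as F

df_a = spark.table("A").filter(
    (F.col("Day") >= "2016-01-01") & (F.col("Day") <= "2016-12-31"))

# month() yields 1..12, matching Month_No in table C
df_a_month = df_a.withColumn("Month_No", F.month(F.to_date("Day")))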

1 Answer:

Answer 0 (score: 0)

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# SparkSession with Hive support replaces the old HiveContext(sc) pattern
spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Table B has no "day" column, so filter on ID instead
Tab2 = spark.sql("select * from B where ID is not null")

def UDF_df(i):
    # i is a Row holding one distinct partition day of table A
    print(i[0])
    ABC2 = spark.sql("select * from A where day = '{0}'".format(i[0]))
    Join = ABC2.join(Tab2, ABC2.ID == Tab2.ID) \
               .select(Tab2.skey, ABC2.Day, ABC2.Name, ABC2.Description)
    Join.write \
        .mode("append") \
        .insertInto("Table")

# Distinct partition days of A inside the target year
ABC = spark.sql("select distinct day from A "
                "where day >= '2016-01-01' and day <= '2016-12-31'")

for i in ABC.collect():
    UDF_df(i)

The query above works but takes a long time, as the number of columns is
around 60 (only a sample of 3 is shown). I also did not join Table C,
because I was not sure how to join it without producing a Cartesian product.
Performance is not good and I am not sure how to optimise the query.
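
One way to avoid both the collect() loop over days and the missing join to Table C is to do everything in a single job: filter A to the target year once, join B on ID, and join the 12-row C with a broadcast range condition. This is only a sketch under the table and column names shown in the question (the target table "Table" is kept from the answer above), not a tested solution:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Read once, restricted to the target year
a = spark.table("A").filter((F.col("Day") >= "2016-01-01") &
                            (F.col("Day") <= "2016-12-31"))
b = spark.table("B")
c = spark.table("C")

result = (a.join(b, "ID")                 # equi-join on the shared ID column
           .join(F.broadcast(c),          # C has only 12 rows
                 (a["Day"] >= c["Start_Date"]) & (a["Day"] <= c["End_Date"]))
           .select(b["SkEY"], a["Day"], a["Name"], a["Description"],
                   c["Month_No"]))

# One append per run instead of one write per distinct day
result.write.mode("append").insertInto("Table")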