My SQL basically joins two tables and produces accomm_sk. If the value of accomm_sk is NULL, a Spark UDF should be called that looks the value up in a third table and returns the result. Since Spark does not allow this function to be registered as a UDF, how can I use it in Spark SQL?
Spark UDF:
def GeneratedAccommSk(localHash):
    # Look up the surrogate key for the given hash in the dimension table
    query = 'select accommodation_sk as accomm_sk from staging.accomm_dim where accomm_hash="{}"'.format(localHash)
    accommSk_Df = spark.sql(query)
    accomm_count = accommSk_Df.filter(accommSk_Df.accomm_sk.isNotNull()).count()
    if accomm_count != 0:
        accomm_sk = accommSk_Df.select('accomm_sk').collect()[0].asDict()['accomm_sk']
    else:
        # Fall back to a JVM random-number generator when no match exists
        func = sc._gateway.jvm.RandomNumberGenerator()
        accomm_sk = func.generateRandomNumber().encode('ascii', 'ignore')
    return accomm_sk
Spark SQL:
rate_fact_df = spark.sql("""
    -- Call the GeneratedAccommSk UDF when the join yields no surrogate key
    select case when b.accommodation_sk IS NOT NULL THEN b.accommodation_sk
                ELSE GeneratedAccommSk(a.accomm_hash) END
    from staging.contract_test a
    join dim.accomm_dim b
    on a.accomm_hash = b.accommodation_hash
""")
Answer 0 (score: 0)
This approach cannot work, for at least two reasons:

You cannot use the SparkSession or any distributed object (DataFrame, RDD) inside a UDF, so spark.sql cannot be called from GeneratedAccommSk.

Depending on the size of accommSk_Df, you should either collect it and use a local object (Lookup in spark dataframes) or perform another join.