Inner join of a static df with a structured streaming df in PySpark: pyspark.sql.utils.AnalysisException

Asked: 2020-03-03 03:34:09

Tags: python join pyspark spark-streaming

Below is the information for the static and structured streaming DataFrames.

#static df
>>> static_df
DataFrame[uuid: string, rec_time: timestamp]
#structured streaming df
>>> df_2
DataFrame[key: string, rec_time: timestamp, devId: string]
from pyspark.sql.functions import col

joined_df = static_df.alias("static").join(df_2.alias("streaming"), col("static.uuid") == col("streaming.devId"), "inner")
S_joined_df = joined_df.writeStream.format("memory").queryName('joined_df').start()
spark.sql("select * from joined_df")

It returned the error pyspark.sql.utils.AnalysisException: 'java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient;'

0 answers:

There are no answers.