Spark Structured Streaming gives me an error: org.apache.spark.sql.AnalysisException: 'foreachBatch' does not support partitioning.

Asked: 2019-12-09 15:40:53

Tags: apache-spark databricks spark-structured-streaming azure-databricks

I have designed the following Structured Streaming code in Databricks to write to Azure Data Lake:

def upsertToDelta(microBatchOutputDF: DataFrame, batchId: Long) {

  // Expose the micro-batch as a temp view so it can be referenced from SQL
  microBatchOutputDF.createOrReplaceTempView("updates")

  // Keep only the latest record per smtUidNr, then upsert it into the silver table.
  // Note: the inner query must also select the data columns (dcl, inv, evt, smt, msgInfSrcCd)
  // so the outer SELECT can resolve them.
  microBatchOutputDF.sparkSession.sql(s"""
    MERGE INTO silver AS r
    USING (
      SELECT smtUidNr, dcl, inv, evt, smt, msgTs, msgInfSrcCd
      FROM (
        SELECT smtUidNr, dcl, inv, evt, smt, msgTs, msgInfSrcCd
             , RANK() OVER (PARTITION BY smtUidNr ORDER BY msgTs DESC) AS rank
             , ROW_NUMBER() OVER (PARTITION BY smtUidNr ORDER BY msgTs DESC) AS row_num
        FROM updates
      )
      WHERE rank = 1 AND row_num = 1
    ) AS u
    ON u.smtUidNr = r.smtUidNr
    WHEN MATCHED AND u.msgTs > r.msgTs THEN
      UPDATE SET *
    WHEN NOT MATCHED THEN
      INSERT *
  """)
}

splitDF.writeStream
  .format("delta")
  .foreachBatch(upsertToDelta _)
  .outputMode("append")
  .partitionBy("year", "month", "day")
  .option("checkpointLocation", "abfss://checkpoint@mcfdatalake.dfs.core.windows.net/kjd/test/")
  .start("abfss://dump@mcfdatalake.dfs.core.windows.net/main_data/")

When I try to execute this, it gives me the following error:

org.apache.spark.sql.AnalysisException: 'foreachBatch' does not support partitioning;

What is the alternative to using foreachBatch with partitioning?

1 Answer:

Answer 0 (score: 0)


What is the alternative to using foreachBatch with partitioning?

Use partitioning inside foreachBatch instead of on the writeStream itself.
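
A minimal sketch of that idea, reusing the year/month/day columns and paths from the question. The batch function below does a plain partitioned append rather than the original MERGE, purely to show where partitionBy moves; it is not a drop-in replacement for upsertToDelta:

import org.apache.spark.sql.DataFrame

// Hypothetical batch function: partitioning happens on the batch writer, not on writeStream
def writePartitionedBatch(microBatchOutputDF: DataFrame, batchId: Long): Unit = {
  microBatchOutputDF.write
    .format("delta")
    .mode("append")
    .partitionBy("year", "month", "day")
    .save("abfss://dump@mcfdatalake.dfs.core.windows.net/main_data/")
}

// The streaming query itself no longer calls partitionBy, so the AnalysisException goes away
splitDF.writeStream
  .foreachBatch(writePartitionedBatch _)
  .outputMode("append")
  .option("checkpointLocation", "abfss://checkpoint@mcfdatalake.dfs.core.windows.net/kjd/test/")
  .start()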

You can also write each batch to a Delta table, and then run a separate query on that Delta table to merge it into the other tables.
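
A minimal sketch of that second option, assuming a hypothetical staging path (abfss://dump@mcfdatalake.dfs.core.windows.net/staging/) and the ambient spark session of a Databricks notebook; cleanup and deduplication of already-merged staging data are omitted:

import org.apache.spark.sql.DataFrame

// Land each micro-batch in a (partitioned) staging Delta table
def appendToStaging(microBatchOutputDF: DataFrame, batchId: Long): Unit = {
  microBatchOutputDF.write
    .format("delta")
    .mode("append")
    .partitionBy("year", "month", "day")
    .save("abfss://dump@mcfdatalake.dfs.core.windows.net/staging/")
}

splitDF.writeStream
  .foreachBatch(appendToStaging _)
  .outputMode("append")
  .option("checkpointLocation", "abfss://checkpoint@mcfdatalake.dfs.core.windows.net/kjd/test/")
  .start()

// Then, as a separate (e.g. scheduled) batch query, merge the staged rows into silver.
// If staging can hold several versions of the same smtUidNr, deduplicate as in the original query first.
spark.sql("""
  MERGE INTO silver AS r
  USING (SELECT * FROM delta.`abfss://dump@mcfdatalake.dfs.core.windows.net/staging/`) AS u
  ON u.smtUidNr = r.smtUidNr
  WHEN MATCHED AND u.msgTs > r.msgTs THEN UPDATE SET *
  WHEN NOT MATCHED THEN INSERT *
""")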