My scenario
Problem:
from pyspark.sql.functions import from_json

inData = spark.readStream.format("eventhub").load()
udfdata = inData.select(from_json(myudf("column"), schema).alias("result")).select("result.*")
filter1 = udfdata.filter("column == 'filter1'")
filter2 = udfdata.filter("column == 'filter2'")
# write filter1 to two different sinks
filter1.writeStream.format("delta").start(table1)
filter1.writeStream.format("eventhub").start()
# write filter2 to two different sinks
filter2.writeStream.format("delta").start(table2)
filter2.writeStream.format("eventhub").start()
Answer 0 (score: 0)
Every time you call .writeStream...start(), you create a new, independent streaming query. This means that for each output sink you define, Spark reads the data from the input source again and re-processes the DataFrame.
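To illustrate that point, each call to start() returns its own StreamingQuery handle, and all of them appear separately in spark.streams.active. A minimal sketch under your assumed names (the query names and checkpoint paths below are placeholders, not from your code):

# Hypothetical illustration: two sinks fed from the same lineage still become
# two independent queries, each reading and processing the source on its own.
q1 = (filter1.writeStream.format("delta")
      .queryName("filter1_to_delta")
      .option("checkpointLocation", "/checkpoints/filter1_delta")
      .start(table1))
q2 = (filter1.writeStream.format("eventhub")
      .queryName("filter1_to_eventhub")
      .option("checkpointLocation", "/checkpoints/filter1_eventhub")
      .start())

# Each query is tracked separately and keeps its own offsets and progress.
for q in spark.streams.active:
    print(q.name, q.id)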
If you want to read and process the data only once and then write the results to multiple sinks, you can use the foreachBatch sink as a workaround:
def filter_and_output(udfdata, batch_id):
    # At this point udfdata is a regular (batch) DataFrame, not a streaming DataFrame
    udfdata.cache()

    filter1 = udfdata.filter("column == 'filter1'")
    filter2 = udfdata.filter("column == 'filter2'")

    # write filter1 to both sinks
    filter1.write.format("delta").mode("append").save(table1)
    filter1.write.format("eventhub").save()

    # write filter2 to both sinks
    filter2.write.format("delta").mode("append").save(table2)
    filter2.write.format("eventhub").save()

    udfdata.unpersist()

inData = spark.readStream.format("eventhub").load()
udfdata = inData.select(from_json(myudf("column"), schema).alias("result")).select("result.*")
udfdata.writeStream.foreachBatch(filter_and_output).start()
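In a real job you would normally also give that single foreachBatch query a checkpoint location and block on it. A minimal sketch under the same assumed names (the checkpoint path is just a placeholder):

# One streaming query drives both sinks: the source is read and the UDF is
# applied once per micro-batch, and the fan-out happens inside filter_and_output.
# The checkpoint path below is a placeholder.
query = (udfdata.writeStream
         .option("checkpointLocation", "/checkpoints/fanout_query")
         .foreachBatch(filter_and_output)
         .start())
query.awaitTermination()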
You can read more about foreachBatch in the Spark Structured Streaming documentation.
To answer your questions: