My code looks something like this:
sc = SparkContext()
ssc = StreamingContext(sc, 30)
initRDD = sc.parallelize('path_to_data')
lines = ssc.socketTextStream('localhost', 9999)
res = lines.transform(lambda x: x.join(initRDD))
res.pprint()
My problem is that initRDD needs to be refreshed every day at midnight.
I tried this:
sc = SparkContext()
ssc = StreamingContext(sc, 30)
lines = ssc.socketTextStream('localhost', 9999)
def func(rdd):
    initRDD = rdd.context.parallelize('path_to_data')
    return rdd.join(initRDD)
res = lines.transform(func)
res.pprint()
But it seems initRDD is recreated every 30 seconds, i.e. once per batchDuration. Is there a good way to do this?
Answer 0 (score: 3)
One option is to check a deadline before the transform. The check is a simple comparison, so it is cheap to do at each batch interval:
import java.time.{LocalDate, ZoneOffset}

def nextDeadline(): Long = {
  // assumes midnight in the UTC timezone
  LocalDate.now.atStartOfDay().plusDays(1).toInstant(ZoneOffset.UTC).toEpochMilli()
}
// Note this is a mutable variable!
var initRDD = sparkSession.read.parquet("/tmp/learningsparkstreaming/sensor-records.parquet")
// Note this is a mutable variable!
var _nextDeadline = nextDeadline()
val lines = ssc.socketTextStream("localhost", 9999)
// We use foreachRDD as a scheduling trigger.
// We don't use the data, only the execution hook.
lines.foreachRDD { _ =>
  if (System.currentTimeMillis > _nextDeadline) {
    initRDD = sparkSession.read.parquet("/tmp/learningsparkstreaming/sensor-records.parquet")
    _nextDeadline = nextDeadline()
  }
}
// If the RDD was updated, it will be picked up in this stage.
val res = lines.transform(rdd => rdd.join(initRDD))
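Since the question is in PySpark, here is the same pattern sketched in Python. This is a minimal sketch of just the scheduling logic; the names next_deadline, refresh_if_due, and state are illustrative helpers, not Spark API. In a real job you would call refresh_if_due from a foreachRDD hook, exactly as the Scala answer does:

```python
import time
from datetime import datetime, timedelta, timezone

def next_deadline():
    """Next midnight in UTC, as epoch milliseconds (mirrors the Scala nextDeadline)."""
    tomorrow = datetime.now(timezone.utc).date() + timedelta(days=1)
    midnight = datetime.combine(tomorrow, datetime.min.time(), tzinfo=timezone.utc)
    return int(midnight.timestamp() * 1000)

# Mutable state shared with the per-batch hook, like the mutable vars in Scala.
state = {"deadline_ms": next_deadline()}

def refresh_if_due(now_ms, reload_fn):
    """Cheap comparison run once per batch; reload only once the deadline passes."""
    if now_ms > state["deadline_ms"]:
        reload_fn()  # e.g. re-read the reference data into a new RDD
        state["deadline_ms"] = next_deadline()

# Hypothetical wiring inside the streaming job (not run here):
# lines.foreachRDD(lambda rdd: refresh_if_due(int(time.time() * 1000), reload))
```

The key point is the same as in the Scala answer: the comparison runs every batch, but the expensive reload happens only once per day, and the transform picks up whatever RDD the mutable reference currently holds.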