I need to replay events that were saved to HDFS during online Kafka streaming back into a PySpark DStream so they go through the same processing algorithms. I found a code example by Holden Karau that is "equivalent to a checkpointable, replayable, reliable message queue like Kafka". I wonder whether it is possible to implement it in PySpark:
package com.holdenkarau.spark.testing
import org.apache.spark.streaming._
import org.apache.spark._
import org.apache.spark.rdd.RDD
import org.apache.spark.SparkContext._
import scala.language.implicitConversions
import scala.reflect.ClassTag
import org.apache.spark.streaming.dstream.FriendlyInputDStream
/**
* This is a input stream just for the testsuites. This is equivalent to a
* checkpointable, replayable, reliable message queue like Kafka.
* It requires a sequence as input, and returns the i_th element at the i_th batch
* under manual clock.
*
* Based on TestInputStream class from TestSuiteBase in the Apache Spark project.
*/
class TestInputStream[T: ClassTag](@transient var sc: SparkContext,
    ssc_ : StreamingContext, input: Seq[Seq[T]], numPartitions: Int)
  extends FriendlyInputDStream[T](ssc_) {

  def start() {}

  def stop() {}

  def compute(validTime: Time): Option[RDD[T]] = {
    logInfo("Computing RDD for time " + validTime)
    val index = ((validTime - ourZeroTime) / slideDuration - 1).toInt
    val selectedInput = if (index < input.size) input(index) else Seq[T]()

    // lets us test cases where RDDs are not created
    Option(selectedInput).map { si =>
      val rdd = sc.makeRDD(si, numPartitions)
      logInfo("Created RDD " + rdd.id + " with " + selectedInput)
      rdd
    }
  }
}
Answer 0 (score: 0)
Spark provides two built-in DStream implementations that can be used for testing, so in most cases you don't need anything external. The second of these is available, in a simplified form, in PySpark as pyspark.streaming.StreamingContext.queueStream:
# A batch duration is required when creating the StreamingContext
ssc = StreamingContext(sc, batchDuration=1)

# By default each queued RDD is consumed as one micro-batch (oneAtATime=True)
stream = ssc.queueStream([
    sc.range(0, 1000),
    sc.range(1000, 2000),
    sc.range(2000, 3000)
])
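The same idea extends to your case of replaying data already saved to HDFS: load each saved batch into its own RDD and queue them up. A minimal sketch, assuming (hypothetically) that the events were written as text files with one directory per original micro-batch under hdfs:///events/, and that your processing is expressed as DStream transformations:

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext

    sc = SparkContext(appName="replay-from-hdfs")
    ssc = StreamingContext(sc, batchDuration=1)

    # Hypothetical layout: one directory per original micro-batch.
    batch_dirs = [
        "hdfs:///events/batch-0000",
        "hdfs:///events/batch-0001",
        "hdfs:///events/batch-0002",
    ]

    # One RDD per saved batch; queueStream replays them one per interval.
    replayed = ssc.queueStream([sc.textFile(d) for d in batch_dirs])

    # Apply the same transformations you used on the live Kafka stream.
    replayed.count().pprint()

    ssc.start()
    ssc.awaitTerminationOrTimeout(10)
    ssc.stop(stopSparkContext=True)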
If that is not enough, you can always use a separate thread to write serialized data atomically to the file system, and read it from there with the standard file-based DStream.
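For completeness, a sketch of that second approach: a feeder thread writes each batch to a staging file and then renames it into the monitored directory (the rename is what makes the file appear atomically to the stream), while textFileStream picks up the new files. The directory names, batch contents, and threading details are illustrative assumptions, not part of the original answer:

    import os
    import threading
    import time

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext

    sc = SparkContext(appName="file-based-replay")
    ssc = StreamingContext(sc, batchDuration=5)

    # Hypothetical local directories; use HDFS paths in a real deployment.
    STAGING_DIR = "/tmp/replay-staging"
    INPUT_DIR = "/tmp/replay-input"
    os.makedirs(STAGING_DIR, exist_ok=True)
    os.makedirs(INPUT_DIR, exist_ok=True)

    def feeder(batches):
        """Write each batch to a staging file, then rename it into the
        monitored directory so the stream only ever sees complete files."""
        for i, lines in enumerate(batches):
            name = "batch-%04d.txt" % i
            staged = os.path.join(STAGING_DIR, name)
            with open(staged, "w") as f:
                f.write("\n".join(lines))
            os.rename(staged, os.path.join(INPUT_DIR, name))
            time.sleep(5)

    # textFileStream only processes files that appear after the stream starts.
    lines = ssc.textFileStream(INPUT_DIR)
    lines.count().pprint()

    ssc.start()  # non-blocking; the feeder runs while the stream is active
    threading.Thread(
        target=feeder,
        args=([["a", "b"], ["c", "d"], ["e", "f"]],),
        daemon=True,
    ).start()

    ssc.awaitTerminationOrTimeout(30)
    ssc.stop(stopSparkContext=True)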