Kafka consumer for Spark written in Scala for Kafka API 0.10: custom AVRO deserializer

Date: 2017-07-10 12:58:33

Tags: scala apache-spark apache-kafka

I am upgrading the Kafka API of my Spark Scala app to v0.10. I used to have custom methods for deserializing messages that came in as byte strings.

I have realized there is a way to pass StringDeserializer or ByteArrayDeserializer as a parameter for either the key or the value.

However, I cannot find any information on how to create a custom Avro schema deserializer, so that my kafkaStream can use it when I createDirectStream and consume data from Kafka.

Is it possible?

1 Answer:

Answer 0 (score: 5)

It is possible. You need to override the Deserializer&lt;T&gt; interface defined in org.apache.kafka.common.serialization, and point the key.deserializer or value.deserializer entry of the ConsumerStrategy[K, V] that holds the Kafka parameters at your custom class. For example:

import java.util

import org.apache.kafka.common.serialization.Deserializer

class AvroDeserializer extends Deserializer[Array[Byte]] {
  // configure/close are usually no-ops for a stateless deserializer
  override def configure(map: util.Map[String, _], b: Boolean): Unit = ???
  override def close(): Unit = ???
  // decode the incoming Avro bytes here and return your value type
  override def deserialize(s: String, bytes: Array[Byte]): Array[Byte] = ???
}
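
For reference, a concrete implementation could decode the bytes into Avro's GenericRecord. This is only a minimal sketch: it assumes the writer schema is available to the consumer and that messages are plain binary Avro rather than the Confluent schema-registry wire format; the class name GenericAvroDeserializer and the inline schema are placeholders.

import java.util

import org.apache.avro.Schema
import org.apache.avro.generic.{GenericDatumReader, GenericRecord}
import org.apache.avro.io.DecoderFactory
import org.apache.kafka.common.serialization.Deserializer

class GenericAvroDeserializer extends Deserializer[GenericRecord] {
  // Placeholder schema: in practice, load the writer schema from a file
  // or a schema registry instead of hard-coding it.
  private val schema: Schema = new Schema.Parser().parse(
    """{"type":"record","name":"Example","fields":[{"name":"id","type":"string"}]}"""
  )
  private val reader = new GenericDatumReader[GenericRecord](schema)

  override def configure(configs: util.Map[String, _], isKey: Boolean): Unit = ()

  override def deserialize(topic: String, bytes: Array[Byte]): GenericRecord =
    if (bytes == null) null
    else reader.read(null, DecoderFactory.get().binaryDecoder(bytes, null))

  override def close(): Unit = ()
}

Since GenericRecord is not guaranteed to be Java-serializable, mapping records to a plain case class right after consumption can avoid Spark serialization issues downstream.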

然后:

import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import my.location.avro.AvroDeserializer // adjust to wherever your deserializer lives

val ssc: StreamingContext = ???
val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "localhost:9092,anotherhost:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[AvroDeserializer],
  "group.id" -> "use_a_separate_group_id_for_each_stream",
  "auto.offset.reset" -> "latest",
  "enable.auto.commit" -> (false: java.lang.Boolean)
)

val topics = Array("sometopic")
// The value type parameter must match what your Deserializer[T] produces;
// the AvroDeserializer above yields Array[Byte], so both type lists agree.
val stream = KafkaUtils.createDirectStream[String, Array[Byte]](
  ssc,
  PreferConsistent,
  Subscribe[String, Array[Byte]](topics, kafkaParams)
)
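
Finally, a short usage sketch of consuming the resulting stream (assuming ssc was built with a batch interval; the foreachRDD body is only illustrative):

// Each record's value is whatever the custom deserializer returned
// (Array[Byte] for the skeleton above).
stream.foreachRDD { rdd =>
  rdd.foreach { record =>
    println(s"key=${record.key}, value bytes=${record.value.length}")
  }
}

ssc.start()
ssc.awaitTermination()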