How to get the last offset committed by a Kafka consumer (Spark job) if the consumer fails (Scala)

Time: 2018-08-14 15:33:59

Tags: scala apache-spark apache-kafka kafka-consumer-api

Before going into any details, please note that I am not asking how to get the latest offset from the console using kafka-run-class.sh kafka.tools.ConsumerOffsetChecker.

I am trying to create a fault-tolerant Kafka consumer (Kafka version 0.10) in Spark (2.3.1) using Scala (2.11.8). By fault-tolerant I mean that if, for whatever reason, the Kafka consumer dies and restarts, it should resume consuming messages from the last committed offset.

To achieve this, I commit the Kafka offsets once they have been consumed, using the following code:

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "localhost:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "group_101",
  "auto.offset.reset" -> "latest",
  "enable.auto.commit" -> (false: java.lang.Boolean), /* because messages successfully polled by the consumer may not yet have resulted in a Spark output operation */
  "session.timeout.ms" -> (30000: java.lang.Integer),
  "heartbeat.interval.ms" -> (3000: java.lang.Integer)
)

val topic = Array("topic_1")

val offsets = Map(new org.apache.kafka.common.TopicPartition("kafka_cdc_1", 0) -> 2L) /*Edit: Added code to fetch offset*/

val kstream = KafkaUtils.createDirectStream[String, String](
  ssc,
  PreferConsistent,
  Subscribe[String, String](topic, kafkaParams, offsets) /* Edit: added offsets */
)

kstream.foreachRDD { rdd =>
  val offsetRange = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  if (!rdd.isEmpty()) {
    val rawRdd = rdd.map(record => (record.key(), record.value())).map(_._2).toDS()
    val df = spark.read.schema(tabSchema).json(rawRdd)
    df.createOrReplaceTempView("temp_tab")
    df.write.insertInto("hive_table")
  }
  kstream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRange) /* doing the async commit here */
}

I have tried many ways to fetch the latest offset for a given topic, but could not get any of them to work.

Could someone help me with the Scala code for this?

Edit: In the code above, I tried to fetch the last offset by using

val offsets = Map(new org.apache.kafka.common.TopicPartition("kafka_cdc_1", 0) -> 2L) /* Edit: added code to fetch the offset */

but the offset the code above fetches is 0, not the latest one. Is there any way to fetch the latest offset?
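
For debugging, it can help to print the offset ranges each micro-batch actually reads (the same offsetRange values that later get committed). A minimal sketch against the kstream created above; in practice the println would simply go inside the existing foreachRDD right after offsetRange is obtained, and the log format here is purely illustrative:

import org.apache.spark.streaming.kafka010.{HasOffsetRanges, OffsetRange}

kstream.foreachRDD { rdd =>
  // One OffsetRange per topic-partition read in this micro-batch.
  val ranges: Array[OffsetRange] = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  ranges.foreach { o =>
    println(s"topic=${o.topic} partition=${o.partition} fromOffset=${o.fromOffset} untilOffset=${o.untilOffset}")
  }
}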

1 answer:

Answer 0 (score: 0)

Found a solution to the above problem. Here it is. Hope it helps someone in need.

Language: Scala, Spark job

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "localhost:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "group_101",
  "auto.offset.reset" -> "latest",
  "enable.auto.commit" -> (false: java.lang.Boolean), /* because messages successfully polled by the consumer may not yet have resulted in a Spark output operation */
  "session.timeout.ms" -> (30000: java.lang.Integer),
  "heartbeat.interval.ms" -> (3000: java.lang.Integer)
)

import java.util.Properties

// Create a new Properties object with the Kafka parameters as done previously. Note: both need to be present. We will use the Properties object only to fetch the last committed offset.

val kafka_props = new Properties()
kafka_props.put("bootstrap.servers", "localhost:9092")
kafka_props.put("key.deserializer",classOf[StringDeserializer])
kafka_props.put("value.deserializer",classOf[StringDeserializer])
kafka_props.put("group.id","group_101")
kafka_props.put("auto.offset.reset","latest")
kafka_props.put("enable.auto.commit",(false: java.lang.Boolean))
kafka_props.put("session.timeout.ms",(30000: java.lang.Integer))
kafka_props.put("heartbeat.interval.ms",(3000: java.lang.Integer))

val topic = Array("topic_1")

/*val offsets = Map(new org.apache.kafka.common.TopicPartition("topic_1", 0) -> 2L) Edit: Added code to fetch offset*/

val topicAndPartition = new org.apache.kafka.common.TopicPartition("topic_1", 0) // using partition 0 because this topic has only a single partition
val consumer = new KafkaConsumer[String,String](kafka_props)    //create a 2nd consumer to fetch last offset
import java.util
consumer.subscribe(util.Arrays.asList("topic_1"))   // subscribe the 2nd consumer to the topic; without this step, the offsetAndMetadata can't be fetched
val offsetAndMetadata = consumer.committed(topicAndPartition)    //Find last committed offset for the given topicAndPartition
val endOffset = offsetAndMetadata.offset().toLong   //fetch the last committed offset from offsetAndMetadata and cast it to Long data type.

val fetch_from_offset = Map(new org.apache.kafka.common.TopicPartition("topic_1", 0) -> endOffset) // create a Map with data type (TopicPartition, Long)

val kstream = KafkaUtils.createDirectStream[String, String](
  ssc,
  PreferConsistent,
  Subscribe[String, String](topic, kafkaParams, fetch_from_offset) // pass the offset Map of type (TopicPartition, Long) created earlier
)

kstream.foreachRDD { rdd =>
  val offsetRange = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  if (!rdd.isEmpty()) {
    val rawRdd = rdd.map(record => (record.key(), record.value())).map(_._2).toDS()
    val df = spark.read.schema(tabSchema).json(rawRdd)
    df.createOrReplaceTempView("temp_tab")
    df.write.insertInto("hive_table")
  }
  kstream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRange) /* doing the async offset commit here */
}
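
Note: KafkaConsumer.committed returns null when the consumer group has never committed an offset for the partition, so offsetAndMetadata.offset() above would throw a NullPointerException on the very first run. A minimal defensive variant of the offset lookup, with an arbitrary 0L fallback chosen purely for illustration; it also closes the temporary consumer once the offset has been read:

val committed = consumer.committed(topicAndPartition)   // may be null if nothing has been committed yet
val startOffset: Long =
  if (committed != null) committed.offset()
  else 0L                                                // assumption: fall back to the beginning of the partition

consumer.close()                                         // the temporary consumer is only needed for the offset lookup

val fetch_from_offset = Map(topicAndPartition -> startOffset)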