Spark Streaming job fails after new partitions are assigned to the Kafka topic (old ones revoked): No current assignment for partition topic1

Asked: 2018-03-23 15:15:29

Tags: apache-spark apache-kafka spark-streaming offset kafka-consumer-api

I am using Spark Streaming with Kafka and create the direct stream with the code below:

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}

// Consumer configuration; the tuning values are passed in as CLI arguments.
val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> conf.getString("kafka.brokers"),
  "zookeeper.connect" -> conf.getString("kafka.zookeeper"),
  "group.id" -> conf.getString("kafka.consumergroups"),
  "auto.offset.reset" -> args(1),
  "enable.auto.commit" -> (conf.getString("kafka.autoCommit").toBoolean: java.lang.Boolean),
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "security.protocol" -> "SASL_PLAINTEXT",
  "session.timeout.ms" -> args(2),
  "max.poll.records" -> args(3),
  "request.timeout.ms" -> args(4),
  "fetch.max.wait.ms" -> args(5))

val messages = KafkaUtils.createDirectStream[String, String](
  ssc,
  LocationStrategies.PreferConsistent,
  ConsumerStrategies.Subscribe[String, String](topicsSet, kafkaParams))

After some processing, we commit the offsets using the commitAsync API:

try {
  messages.foreachRDD { rdd =>
    val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
    // ... batch processing happens here ...
    messages.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
  }
} catch {
  // Note: this only catches errors thrown while registering the output
  // operation; exceptions raised per batch surface in the streaming job itself.
  case e: Throwable => e.printStackTrace()
}

The error that crashes the job:

            18/03/20 10:43:30 INFO ConsumerCoordinator: Revoking previously assigned partitions [TOPIC_NAME-3, TOPIC_NAME-5, TOPIC_NAME-4] for group 21_feb_reload_2
            18/03/20 10:43:30 INFO AbstractCoordinator: (Re-)joining group 21_feb_reload_2
            18/03/20 10:43:30 INFO AbstractCoordinator: (Re-)joining group 21_feb_reload_2
            18/03/20 10:44:00 INFO AbstractCoordinator: Successfully joined group 21_feb_reload_2 with generation 20714
            18/03/20 10:44:00 INFO ConsumerCoordinator: Setting newly assigned partitions [TOPIC_NAME-1, TOPIC_NAME-0, TOPIC_NAME-2] for group 21_feb_reload_2
            18/03/20 10:44:00 ERROR JobScheduler: Error generating jobs for time 1521557010000 ms
            java.lang.IllegalStateException: No current assignment for partition TOPIC_NAME-4
                at org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedState(SubscriptionState.java:251)
                at org.apache.kafka.clients.consumer.internals.SubscriptionState.needOffsetReset(SubscriptionState.java:315)
                at org.apache.kafka.clients.consumer.KafkaConsumer.seekToEnd(KafkaConsumer.java:1170)
                at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.latestOffsets(DirectKafkaInputDStream.scala:197)
                at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.compute(DirectKafkaInputDStream.scala:214)
                at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
                at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
                at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
                at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
                at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
                at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
                at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:335)
                at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:333)
                at scala.Option.orElse(Option.scala:289)
                at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:330)
                at org.apache.spark.streaming.dstream.MappedDStream.compute(MappedDStream.scala:36)
                at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
                at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
                at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
                at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
                at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
                at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
                at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:335)
                at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:333)
                at scala.Option.orElse(Option.scala:289)
                at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:330)
                at org.apache.spark.streaming.dstream.ForEachDStream.generateJob(ForEachDStream.scala:48)
                at org.apache.spark.streaming.DStreamGraph$$anonfun$1.apply(DStreamGraph.scala:117)
                at org.apache.spark.streaming.DStreamGraph$$anonfun$1.apply(DStreamGraph.scala:116)
                at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
                at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
                at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
                at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
                at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
                at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
                at org.apache.spark.streaming.DStreamGraph.generateJobs(DStreamGraph.scala:116)
                at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$3.apply(JobGenerator.scala:249)
                at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$3.apply(JobGenerator.scala:247)
                at scala.util.Try$.apply(Try.scala:192)
                at org.apache.spark.streaming.scheduler.JobGenerator.generateJobs(JobGenerator.scala:247)
                at org.apache.spark.streaming.scheduler.JobGenerator.org$apache$spark$streaming$scheduler$JobGenerator$$processEvent(JobGenerator.scala:183)
                at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:89)
                at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:88)
                at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
            18/03/20 10:44:00 ERROR ApplicationMaster: User class threw exception: java.lang.IllegalStateException: No current assignment for partition 

My findings:

1. A similar issue is covered in the post Kafka Spark Stream throws Exception: No current assignment for partition, but it does not explain why Assign should be used instead of Subscribe (a sketch of the Assign strategy follows below).
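For context, this is roughly what switching to the Assign strategy would look like — a minimal sketch only; the partition count comes from the log above (TOPIC_NAME-0 through TOPIC_NAME-5), and fromOffsets is a hypothetical map of starting offsets:

import org.apache.kafka.common.TopicPartition
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}

// Hypothetical: pin the consumer to explicit partitions instead of subscribing,
// so the group coordinator never rebalances them away.
val topicPartitions = (0 until 6).map(new TopicPartition("TOPIC_NAME", _))
val fromOffsets = topicPartitions.map(tp => tp -> 0L).toMap // hypothetical starting offsets

val assignedStream = KafkaUtils.createDirectStream[String, String](
  ssc,
  LocationStrategies.PreferConsistent,
  ConsumerStrategies.Assign[String, String](topicPartitions, kafkaParams, fromOffsets))

The trade-off is that Assign opts out of group management entirely, so newly added partitions are not picked up automatically.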

2. To rule out a rebalance, I increased session.timeout.ms to almost my batch duration, since my processing finishes in under 2 minutes (the batch duration).

session.timeout.ms: the amount of time a consumer can be out of contact with the brokers while still considered alive (https://www.safaribooksonline.com/library/view/kafka-the-definitive/9781491936153/ch04.html)

3. I found the rebalance listener, with its two methods: (a) onPartitionsRevoked and (b) onPartitionsAssigned.

But I could not figure out how to use the first one to commit offsets before a rebalance.
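For reference, the plain Kafka consumer API attaches such a listener at subscribe time — a minimal sketch only, since Spark's createDirectStream manages its consumer internally and offers no supported hook for this; trackedOffsets is a hypothetical callback the application would maintain:

import java.util.{Collection => JCollection}
import scala.collection.JavaConverters._
import org.apache.kafka.clients.consumer.{ConsumerRebalanceListener, KafkaConsumer, OffsetAndMetadata}
import org.apache.kafka.common.TopicPartition

// Hypothetical listener: commit whatever has been processed so far
// before the rebalance takes the partitions away.
class CommitOnRevoke(consumer: KafkaConsumer[String, String],
                     trackedOffsets: () => Map[TopicPartition, OffsetAndMetadata])
    extends ConsumerRebalanceListener {

  // Invoked before partitions are revoked: the last safe point to commit.
  override def onPartitionsRevoked(partitions: JCollection[TopicPartition]): Unit =
    consumer.commitSync(trackedOffsets().asJava)

  // Invoked once the new assignment is in place; nothing to do here.
  override def onPartitionsAssigned(partitions: JCollection[TopicPartition]): Unit = ()
}

// Usage with a hand-managed consumer (not the direct stream):
// consumer.subscribe(List("TOPIC_NAME").asJava, new CommitOnRevoke(consumer, () => offsets))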

Any input is greatly appreciated.

2 Answers:

Answer 0 (score: 0)

I ran into the same problem. It happened when two of my Spark jobs were using the same Kafka client.id, so I assigned a new Kafka client id to one of the jobs.
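If that is the cause, the fix is simply a distinct client.id per job in the consumer params — a sketch with hypothetical values:

// Hypothetical: give each streaming job its own client.id.
val kafkaParamsJobA = kafkaParams + ("client.id" -> "stream-job-a")
val kafkaParamsJobB = kafkaParams + ("client.id" -> "stream-job-b")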

Answer 1 (score: -1)

The Spark Streaming + Kafka integration guide recommends using a separate group id for each Spark Streaming job. After discussing it with my team, we adopted this approach, and all of our Spark Streaming jobs now use distinct group ids.

It has been running for a week now and we have not seen this error again.

https://spark.apache.org/docs/2.2.0/streaming-kafka-0-10-integration.html
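One simple way to enforce this — a sketch only, where appName is a hypothetical per-job identifier (the guide's own example uses the literal "use_a_separate_group_id_for_each_stream" as a reminder):

// Hypothetical: derive a distinct group.id per streaming job from a unique app name.
val appName = "orders-stream" // must differ for every streaming job
val kafkaParamsWithGroup = kafkaParams +
  ("group.id" -> s"${conf.getString("kafka.consumergroups")}-$appName")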