Spark Streaming and Kafka: Missing required configuration "partition.assignment.strategy" which has no default value

Date: 2019-03-13 15:13:29

Tags: apache-spark apache-kafka spark-streaming spark-streaming-kafka

I am trying to run a Spark Streaming application against Kafka on YARN. I am getting the following error in the stack trace -

  

    Caused by: org.apache.kafka.common.config.ConfigException: Missing required configuration "partition.assignment.strategy" which has no default value.
        at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:124)
        at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:48)
        at org.apache.kafka.clients.consumer.ConsumerConfig.<init>(ConsumerConfig.java:194)
        at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:380)
        at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:363)
        at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:350)
        at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.<init>(CachedKafkaConsumer.scala:45)
        at org.apache.spark.streaming.kafka010.CachedKafkaConsumer$.get(CachedKafkaConsumer.scala:194)
        at org.apache.spark.streaming.kafka010.KafkaRDDIterator.<init>(KafkaRDD.scala:252)
        at org.apache.spark.streaming.kafka010.KafkaRDD.compute(KafkaRDD.scala:212)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)

Here is the snippet of my code showing how I create the KafkaStream with Spark Streaming -

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka010.KafkaUtils
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

    val ssc = new StreamingContext(sc, Seconds(60))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "*bootstrap_url:port*",
      "security.protocol" -> "SASL_PLAINTEXT",
      "sasl.kerberos.service.name" -> "kafka",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "annotation-test",
      // Tried commenting and uncommenting this property
      //"partition.assignment.strategy" -> "org.apache.kafka.clients.consumer.RangeAssignor",
      "auto.offset.reset" -> "earliest",
      "enable.auto.commit" -> (false: java.lang.Boolean))

    val topics = Array("*topic-name*")

    val kafkaStream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](topics, kafkaParams))
    val valueKafka = kafkaStream.map(record => record.value())

I have gone through the following posts -

  1. https://issues.apache.org/jira/browse/KAFKA-4547
  2. Pyspark Structured Streaming Kafka configuration error

Based on these, I have updated the kafka util jar in my fat jar from version 0.10.1.0 to 0.10.2.0; the older version was being pulled in by default from the spark-stream-kafka jar as a transitive dependency. My job also works fine when I run it on a single node by setting master to local. I am running Spark version 2.3.1.
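For context, the dependency pinning looks roughly like this; a minimal sketch assuming an sbt build and the spark-streaming-kafka-0-10 artifact (the real build definition is simplified here) -

    // build.sbt (sketch): declare kafka-clients 0.10.2.0 directly so it wins over
    // the 0.10.1.0 version pulled in transitively by spark-streaming-kafka-0-10
    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-streaming"            % "2.3.1" % "provided",
      "org.apache.spark" %% "spark-streaming-kafka-0-10" % "2.3.1",
      "org.apache.kafka" %  "kafka-clients"              % "0.10.2.0"
    )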

1 Answer:

Answer 0 (score: 0)

Add kafka-clients-*.jar to your Spark jars folder. kafka-clients-*.jar can be found in the kafka-*/lib directory.
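In practice that amounts to something like the following; this is only a sketch, the paths are placeholders for the local Kafka and Spark installations, and passing the jar per job with --jars is an alternative to copying it into the installation:

    # copy the client jar shipped with Kafka into Spark's jars folder
    cp /path/to/kafka-*/lib/kafka-clients-*.jar /path/to/spark/jars/

    # or ship it with the individual job instead
    spark-submit --master yarn --jars /path/to/kafka-clients-0.10.2.0.jar ...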