spark-streaming-kafka-0-10: auto.offset.reset is always set to none

Time: 2016-11-11 11:19:45

Tags: apache-kafka spark-streaming

Has anyone run into this issue: assigning auto.offset.reset -> "latest" has no effect on that property in spark-streaming-kafka-0-10?

Here is my code:

val config = StreamingConfigHelper.getStreamingConfig()
val kafkaParams = Map[String, Object]("bootstrap.servers" -> config.brokers,
  "key.deserializer" -> classOf[ByteArrayDeserializer],
  "value.deserializer" -> classOf[ByteArrayDeserializer],
  "group.id" -> "prodgroup",
  "auto.offset.reset" -> "latest",
  "receive.buffer.bytes" -> (65536: java.lang.Integer),
  "enable.auto.commit" -> (false: java.lang.Boolean))
val inputDStream = KafkaUtils.createDirectStream[Array[Byte], Array[Byte]](streamingContext, LocationStrategies.PreferConsistent,
  ConsumerStrategies.Subscribe[Array[Byte], Array[Byte]](config.productTopic.toArray, kafkaParams))

But when I deploy, this is what I get:

16/11/11 11:03:00 INFO ConsumerConfig: ConsumerConfig values: 
metric.reporters = []
metadata.max.age.ms = 300000
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
max.partition.fetch.bytes = 1048576
bootstrap.servers = [xxxxxxxxx:6667, xxxxxxxx:6667, xxxxxxxxxx:6667]
ssl.keystore.type = JKS
enable.auto.commit = false
sasl.mechanism = GSSAPI
interceptor.classes = null
exclude.internal.topics = true
ssl.truststore.password = null
client.id = 
ssl.endpoint.identification.algorithm = null
max.poll.records = 2147483647
check.crcs = true
request.timeout.ms = 40000
heartbeat.interval.ms = 3000
auto.commit.interval.ms = 5000
receive.buffer.bytes = 65536
ssl.truststore.type = JKS
ssl.truststore.location = null
ssl.keystore.password = null
fetch.min.bytes = 1
send.buffer.bytes = 131072
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
group.id = spark-executor-prodgroup1
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
ssl.key.password = null
fetch.max.wait.ms = 500
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
session.timeout.ms = 30000
metrics.num.samples = 2
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXT
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
auto.offset.reset = none
Can you help? Thanks.

2 answers:

Answer 0 (score: 0):

I am not sure why that setting is being ignored, but you could also try simply not setting auto.offset.reset, since the default in the new consumer configs is "latest". Source: https://kafka.apache.org/documentation#newconsumerconfigs
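
For illustration, a minimal sketch of that suggestion, reusing the config object and ByteArrayDeserializer imports from the question's code; the auto.offset.reset entry is simply left out so the consumer falls back to its documented default:

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> config.brokers,
  "key.deserializer" -> classOf[ByteArrayDeserializer],
  "value.deserializer" -> classOf[ByteArrayDeserializer],
  "group.id" -> "prodgroup",
  // "auto.offset.reset" deliberately omitted: the new consumer defaults to "latest"
  "receive.buffer.bytes" -> (65536: java.lang.Integer),
  "enable.auto.commit" -> (false: java.lang.Boolean))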

Answer 1 (score: 0):

Your setting is being overridden in KafkaUtils.fixKafkaParams; I don't know why...
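
For context, a rough paraphrase (not the actual Spark source; details may differ between Spark versions) of the executor-side overrides that fixKafkaParams applies. It matches the log above, where group.id was rewritten to spark-executor-prodgroup1 and auto.offset.reset was forced to none: executors are only meant to read the exact offset ranges the driver computes, so offset resets and auto-commits on the executor are disabled, while the driver-side consumer should still honor the "latest" you configured when deciding where to start.

// Paraphrased sketch of the executor-side parameter fix-up; the real
// KafkaUtils.fixKafkaParams in spark-streaming-kafka-0-10 may differ in detail.
import java.{util => ju}
import org.apache.kafka.clients.consumer.ConsumerConfig

def fixKafkaParamsSketch(kafkaParams: ju.HashMap[String, Object]): Unit = {
  // Executors must not commit or reset offsets on their own; the driver owns offsets.
  kafkaParams.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false: java.lang.Boolean)
  kafkaParams.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none")
  // Executor consumers get a distinct group id derived from the configured one.
  val groupId = kafkaParams.get(ConsumerConfig.GROUP_ID_CONFIG)
  kafkaParams.put(ConsumerConfig.GROUP_ID_CONFIG, "spark-executor-" + groupId)
}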