Kafka consumer only reads messages once two of them have stacked up

Time: 2019-10-11 14:13:44

Tags: java kafka-consumer-api

We have a Kafka producer that only produces messages occasionally.

I wrote a consumer to consume these messages. The problem is that messages are only consumed once two of them have stacked up. For example, if a message is produced at 13:00, the consumer does nothing. If another message is produced at 13:01, the consumer consumes both messages at once. In Kafka Tool's consumer properties there is a column called LAG, which shows 1 while the unconsumed message is pending. Is there some configuration I am missing?
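For reference, here is a minimal sketch of the kind of poll loop involved; the actual consumer code isn't shown in the question, so the topic name and record types are assumptions, while the other settings mirror the config dump below:

    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class IfdConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "mtxbuctra22.prod.orange.intra:9092");
            props.put("group.id", "ifd_006");
            props.put("enable.auto.commit", "true");
            props.put("auto.offset.reset", "earliest");
            props.put("key.deserializer", "io.confluent.kafka.serializers.KafkaAvroDeserializer");
            props.put("value.deserializer", "io.confluent.kafka.serializers.KafkaAvroDeserializer");
            props.put("schema.registry.url", "http://mtxbuctra22.prod.orange.intra:8081");
            props.put("specific.avro.reader", "true");

            try (KafkaConsumer<Object, Object> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("ifd-topic")); // topic name assumed
                while (true) {
                    // In the 0.9.x API, poll(timeout) blocks for up to `timeout` ms waiting for data.
                    ConsumerRecords<Object, Object> records = consumer.poll(1000);
                    for (ConsumerRecord<Object, Object> record : records) {
                        System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                    }
                }
            }
        }
    }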

Consumer configuration:

16:43:04,472 INFO  [org.apache.kafka.clients.consumer.ConsumerConfig] (http--0.0.0.0-8180-1) ConsumerConfig values:
        request.timeout.ms = 180001
        check.crcs = true
        retry.backoff.ms = 100
        ssl.truststore.password = null
        ssl.keymanager.algorithm = SunX509
        receive.buffer.bytes = 32768
        ssl.cipher.suites = null
        ssl.key.password = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        ssl.provider = null
        sasl.kerberos.service.name = null
        session.timeout.ms = 180000
        sasl.kerberos.ticket.renew.window.factor = 0.8
        bootstrap.servers = [mtxbuctra22.prod.orange.intra:9092]
        client.id =
        fetch.max.wait.ms = 180000
        fetch.min.bytes = 1024
        key.deserializer = class io.confluent.kafka.serializers.KafkaAvroDeserializer
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        auto.offset.reset = earliest
        value.deserializer = class io.confluent.kafka.serializers.KafkaAvroDeserializer
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
        ssl.endpoint.identification.algorithm = null
        max.partition.fetch.bytes = 1048576
        ssl.keystore.location = null
        ssl.truststore.location = null
        ssl.keystore.password = null
        metrics.sample.window.ms = 30000
        metadata.max.age.ms = 300000
        security.protocol = PLAINTEXT
        auto.commit.interval.ms = 1000
        ssl.protocol = TLS
        sasl.kerberos.min.time.before.relogin = 60000
        connections.max.idle.ms = 540000
        ssl.trustmanager.algorithm = PKIX
        group.id = ifd_006
        enable.auto.commit = true
        metric.reporters = []
        ssl.truststore.type = JKS
        send.buffer.bytes = 131072
        reconnect.backoff.ms = 50
        metrics.num.samples = 2
        ssl.keystore.type = JKS
        heartbeat.interval.ms = 3000

16:43:04,493 INFO  [io.confluent.kafka.serializers.KafkaAvroDeserializerConfig] (http--0.0.0.0-8180-1) KafkaAvroDeserializerConfig values:
        max.schemas.per.subject = 1000
        specific.avro.reader = true
        schema.registry.url = [http://mtxbuctra22.prod.orange.intra:8081]

16:43:04,498 INFO  [io.confluent.kafka.serializers.KafkaAvroDeserializerConfig] (http--0.0.0.0-8180-1) KafkaAvroDeserializerConfig values:
        max.schemas.per.subject = 1000
        specific.avro.reader = true
        schema.registry.url = [http://mtxbuctra22.prod.orange.intra:8081]

Kafka Tool: (screenshot of the consumer properties showing the LAG column)

1 Answer:

Answer 0 (score: 0)

Figured it out. The Kafka 0.9.0.1 documentation states that the default for fetch.min.bytes is 1, but I am running Kafka 0.9.0.0, where the default is 1024. Because a single message was smaller than 1024 bytes, the broker held the fetch request until two messages had accumulated (or fetch.max.wait.ms expired), so the threshold was only crossed after 2 messages. Changing fetch.min.bytes to 1 fixed it, and messages are now consumed as soon as they arrive.
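For completeness, a sketch of the relevant change using the ConsumerConfig constants; the fetch.max.wait.ms value shown here is just an illustrative extra, not something the answer required:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;

    public class FetchTuning {
        // Extra fetch settings that make single small messages arrive immediately.
        static Properties fetchTuning() {
            Properties props = new Properties();
            // With the 0.9.0.0 default of 1024 bytes, the broker parked each fetch
            // request until enough data (here, ~2 small messages) had accumulated
            // or fetch.max.wait.ms expired. A threshold of 1 byte returns data
            // as soon as anything is available.
            props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, "1");        // fetch.min.bytes
            // Optional: also bound how long the broker may hold an empty fetch.
            props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, "500");    // fetch.max.wait.ms
            return props;
        }
    }

These properties can simply be merged into the Properties object used to build the KafkaConsumer.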