Using Kafka 0.11.0.x

Date: 2017-12-20 09:24:32

Tags: apache-kafka kafka-consumer-api

I am using the Kafka (version 0.11.0.2) server API to start a Kafka broker on localhost, and it starts without any problems. The producer can also send messages successfully, but the consumer never fetches any messages and there are no error logs in the console. So I debugged the code and found the consumer stuck in a loop "refreshing metadata".

Here is the Kafka client source code it keeps looping through:

while (coordinatorUnknown()) {
    RequestFuture<Void> future = lookupCoordinator();
    client.poll(future, remainingMs);

    if (future.failed()) {
        if (future.isRetriable()) {
            remainingMs = timeoutMs - (time.milliseconds() - startTimeMs);
            if (remainingMs <= 0)
                break;

            log.debug("Coordinator discovery failed for group {}, refreshing metadata", groupId);
            client.awaitMetadataUpdate(remainingMs);
        } else
            throw future.exception();
    } else if (coordinator != null && client.connectionFailed(coordinator)) {
        // we found the coordinator, but the connection has failed, so mark
        // it dead and backoff before retrying discovery
        coordinatorDead();
        time.sleep(retryBackoffMs);
    }

    remainingMs = timeoutMs - (time.milliseconds() - startTimeMs);
    if (remainingMs <= 0)
        break;
}

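For reference, the consumer side is nothing special, just a standard subscribe/poll loop roughly like the sketch below (simplified; the class name, bootstrap server, group id, and topic name are placeholders, not my exact code):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SimpleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder, points at the embedded broker below
        props.put("group.id", "test-group");              // placeholder group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("test-topic")); // placeholder topic name

        while (true) {
            // with 0.11.0.2 this poll never returns any records; with 0.10.x it does
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records)
                System.out.println(record.value());
        }
    }
}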
Addition: I changed the Kafka version to 0.10.x and it works fine.

Here is my Kafka server code.

private static void startKafkaLocal() throws Exception {
    final File kafkaTmpLogsDir = File.createTempFile("zk_kafka", "2");
    if (kafkaTmpLogsDir.delete() && kafkaTmpLogsDir.mkdir()) {
        Properties props = new Properties();
        props.setProperty("host.name", KafkaProperties.HOSTNAME);
        props.setProperty("port", String.valueOf(KafkaProperties.KAFKA_SERVER_PORT));
        props.setProperty("broker.id", String.valueOf(KafkaProperties.BROKER_ID));
        props.setProperty("zookeeper.connect", KafkaProperties.ZOOKEEPER_CONNECT);
        props.setProperty("log.dirs", kafkaTmpLogsDir.getAbsolutePath());
        //advertised.listeners=PLAINTEXT://xxx.xx.xx.xx:por

        // flush every message (flush scheduler every 1 ms)
        props.setProperty("log.default.flush.scheduler.interval.ms", "1");
        props.setProperty("log.flush.interval", "1");
        props.setProperty("log.flush.interval.messages", "1");
        props.setProperty("replica.socket.timeout.ms", "1500");
        props.setProperty("auto.create.topics.enable", "true");
        props.setProperty("num.partitions", "1");

        KafkaConfig kafkaConfig = new KafkaConfig(props);

        KafkaServerStartable kafka = new KafkaServerStartable(kafkaConfig);
        kafka.startup();
        System.out.println("start kafka ok " + kafka.serverConfig().numPartitions());
    }
}

Thanks.

1 answer:

Answer 0 (score: 1)

When using Kafka 0.11, if you set num.partitions to 1, you also need to set the following 3 settings:

offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

You can see this from the server logs when running 0.11.
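In the embedded broker from the question, that amounts to adding the three properties to the same Properties object before building the KafkaConfig, roughly like this (a sketch of just the change, not the full method):

props.setProperty("offsets.topic.replication.factor", "1");
props.setProperty("transaction.state.log.replication.factor", "1");
props.setProperty("transaction.state.log.min.isr", "1");

The likely explanation: with a single broker the default replication factor of 3 for the internal offsets and transaction-state topics cannot be satisfied, so __consumer_offsets is never created and the consumer keeps looping while trying to find its group coordinator.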