The consumer configuration is given below. Sometimes, for a unique group ID, the consumer group is not created at all. I am trying to consume messages using the application name as the group id. Even the consumer-groups script does not show that particular consumer group in its list. For example, the group for the given ID Application8 is never created, as shown in the log below.
2019-11-14 14:09:27,719 INFO - Kafka version: 2.3.1
2019-11-14 14:09:27,719 INFO - Kafka commitId: 18a913733fb71c01
2019-11-14 14:09:27,719 INFO - Kafka startTimeMs: 1573720767718
2019-11-14 14:09:27,720 INFO - [Consumer clientId=consumer-1, groupId=Application8] Subscribed to topic(s): config
2019-11-14 14:09:27,955 INFO - [Consumer clientId=consumer-1, groupId=Application8] Cluster ID: h1TJ0oMkQYqO0z8ftlIzpA
import java.io.IOException;
import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Class header and field declarations are not part of the posted snippet;
// they are reconstructed here from how the code below uses them.
public class KafkaConfigConsumer implements Runnable {

    private static KafkaConsumer<String, byte[]> consumer;
    private static Map<TopicPartition, OffsetAndMetadata> currentOffsets = new HashMap<>();

    public static void KafkaServerStart() throws IOException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.0.134:9092");
        // The group id is derived from the application name
        props.put("group.id", "Application8");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("partition.assignment.strategy", "org.apache.kafka.clients.consumer.RoundRobinAssignor");
        // Auto-commit is left on even though offsets are also committed manually below
        props.put("enable.auto.commit", "true");
        props.put("heartbeat.interval.ms", "3000");
        props.put("session.timeout.ms", "9000");
        props.put("auto.offset.reset", "latest");

        consumer = new KafkaConsumer<String, byte[]>(props);
        consumer.subscribe(Collections.singletonList("config"), new RebalanceConfigListener());
        final Thread mainThread = Thread.currentThread();

        // Registering a shutdown hook so we can exit cleanly
        Runtime.getRuntime().addShutdownHook(new Thread() {
            public void run() {
                System.out.println("Starting exit...");
                // KafkaConsumers.consumer.commitSync(KafkaConsumers.currentOffsets);
                // Note that shutdownhook runs in a separate thread, so the only thing we can
                // safely do to a consumer is wake it up
                consumer.wakeup();
                try {
                    mainThread.join();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });
        try {
            while (true) {
                ConsumerRecords<String, byte[]> records = consumer.poll(Duration.ofMillis(100));
                boolean commit = false;
                for (ConsumerRecord<String, byte[]> record : records) {
                    /**
                     * Code for committing the offset on every iteration. Start.
                     */
                    if (!commit)
                        commit = true;
                    /**
                     * Code for committing the offset on every iteration. End.
                     */
                    // LiveDa.processData(record.key(), record.value(), record.offset(),
                    // record.partition());
                    Reinit.reInitMethod(new String(record.value()));
                    /*
                     * System.out.println("Key of the data " + record.key() + " ,values " + new
                     * String(record.value()) + " ,offset is " + record.offset() +
                     * " ,Partition ID " + record.partition());
                     */
                    /**
                     * Code for committing the offset on every iteration. Start.
                     */
                    currentOffsets.put(new TopicPartition(record.topic(), record.partition()),
                            new OffsetAndMetadata(record.offset() + 1, "no metadata"));
                    /**
                     * Code for committing the offset on every iteration. End.
                     */
                }
                /**
                 * Code for committing the offset on every iteration. Start.
                 */
                if (commit)
                    consumer.commitAsync(currentOffsets, null);
                /**
                 * Code for committing the offset on every iteration. End.
                 */
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            // write logic on shutdown.
            System.out.println("EXITING KAFKA");
            /**
             * Code for committing the offset on every iteration. Start.
             */
            consumer.commitSync(currentOffsets);
            /**
             * Code for committing the offset on every iteration. End.
             */
            consumer.close();
        }
    }
    public static void main(String[] args) {
        try {
            KafkaConfigConsumer.KafkaServerStart();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    @Override
    public void run() {
        try {
            KafkaConfigConsumer.KafkaServerStart();
        } catch (IOException e) {
            SystemLogger.error(e);
        }
    }
}
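For reference, the group's presence can also be checked programmatically. Below is a minimal sketch (not part of the original consumer, just an illustration using the same kafka-clients jar) that lists consumer groups with Kafka's AdminClient; it reports the same groups as the kafka-consumer-groups.sh --list script, and Application8 should appear here once the consumer has actually joined the group.

import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ConsumerGroupListing;

public class ListGroups {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.134:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Prints every consumer group the brokers know about
            for (ConsumerGroupListing g : admin.listConsumerGroups().all().get()) {
                System.out.println(g.groupId());
            }
        }
    }
}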
Answer 0 (score: 0)
I solved it: the problem was with the __consumer_offsets topic. One of the Kafka nodes was down, and the partitions associated with that node were in a leaderless state; after resetting that topic the issue was resolved.
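To add some detail for anyone hitting the same symptom: each group id is mapped to one partition of __consumer_offsets, and the broker leading that partition acts as the group coordinator. If that partition has no leader (here, because the node hosting it was down), the coordinator lookup fails and the group never shows up in any listing. The sketch below, again using the AdminClient, is one rough way to spot this; the value 50 assumes the default offsets.topic.num.partitions.

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

public class CheckOffsetsTopic {
    public static void main(String[] args) throws Exception {
        // Which __consumer_offsets partition the group hashes to
        // (Kafka uses abs(groupId.hashCode()) % offsets.topic.num.partitions, default 50)
        int groupPartition = Math.abs("Application8".hashCode()) % 50;
        System.out.println("Application8 maps to __consumer_offsets partition " + groupPartition);

        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.134:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin.describeTopics(Collections.singletonList("__consumer_offsets"))
                    .all().get().get("__consumer_offsets");
            for (TopicPartitionInfo p : desc.partitions()) {
                // A partition without a live leader cannot serve as the group coordinator
                if (p.leader() == null || p.leader().isEmpty()) {
                    System.out.println("Partition " + p.partition() + " has no leader");
                }
            }
        }
    }
}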