I wrote a Java program to consume messages from Kafka. I want to monitor the consumption lag; how can I get it via Java?
BTW, I am using:
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>0.10.1.1</version>
Thanks in advance.
Answer 0 (score: 6)
If you don't want to include the kafka (and Scala) dependency in your project, you can use the class below. It uses only the kafka-clients dependency.
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BinaryOperator;
import java.util.stream.Collectors;
public class KafkaConsumerMonitor {

    public static class PartionOffsets {
        private long endOffset;
        private long currentOffset;
        private int partion;
        private String topic;

        public PartionOffsets(long endOffset, long currentOffset, int partion, String topic) {
            this.endOffset = endOffset;
            this.currentOffset = currentOffset;
            this.partion = partion;
            this.topic = topic;
        }

        public long getEndOffset() {
            return endOffset;
        }

        public long getCurrentOffset() {
            return currentOffset;
        }

        public int getPartion() {
            return partion;
        }

        public String getTopic() {
            return topic;
        }
    }

    private final String monitoringConsumerGroupID = "monitoring_consumer_" + UUID.randomUUID().toString();

    public Map<TopicPartition, PartionOffsets> getConsumerGroupOffsets(String host, String topic, String groupId) {
        Map<TopicPartition, Long> logEndOffset = getLogEndOffset(topic, host);

        KafkaConsumer<?, ?> consumer = createNewConsumer(groupId, host);
        BinaryOperator<PartionOffsets> mergeFunction = (a, b) -> {
            throw new IllegalStateException(); // duplicate TopicPartition keys should never occur
        };

        Map<TopicPartition, PartionOffsets> result = logEndOffset.entrySet()
                .stream()
                .collect(Collectors.toMap(
                        entry -> entry.getKey(),
                        entry -> {
                            OffsetAndMetadata committed = consumer.committed(entry.getKey());
                            // committed is null when the group has never committed for this partition
                            long currentOffset = committed != null ? committed.offset() : 0;
                            return new PartionOffsets(entry.getValue(), currentOffset, entry.getKey().partition(), topic);
                        }, mergeFunction));

        consumer.close();
        return result;
    }

    public Map<TopicPartition, Long> getLogEndOffset(String topic, String host) {
        Map<TopicPartition, Long> endOffsets = new ConcurrentHashMap<>();
        KafkaConsumer<?, ?> consumer = createNewConsumer(monitoringConsumerGroupID, host);
        List<PartitionInfo> partitionInfoList = consumer.partitionsFor(topic);
        List<TopicPartition> topicPartitions = partitionInfoList.stream()
                .map(pi -> new TopicPartition(topic, pi.partition()))
                .collect(Collectors.toList());

        // assign (not subscribe) so this probe consumer does not join the group,
        // then seek to the end of each partition to read its log-end offset
        consumer.assign(topicPartitions);
        consumer.seekToEnd(topicPartitions);
        topicPartitions.forEach(topicPartition -> endOffsets.put(topicPartition, consumer.position(topicPartition)));

        consumer.close();
        return endOffsets;
    }

    private static KafkaConsumer<?, ?> createNewConsumer(String groupId, String host) {
        Properties properties = new Properties();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, host);
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new KafkaConsumer<>(properties);
    }
}
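A usage sketch (the broker address, topic, and group id below are placeholders): lag per partition is simply the end offset minus the current offset.

KafkaConsumerMonitor monitor = new KafkaConsumerMonitor();
Map<TopicPartition, KafkaConsumerMonitor.PartionOffsets> offsets =
        monitor.getConsumerGroupOffsets("localhost:9092", "my-topic", "my-group");
offsets.forEach((tp, po) -> System.out.println(
        tp + " lag = " + (po.getEndOffset() - po.getCurrentOffset())));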
Answer 1 (score: 2)
Personally, I query the JMX information directly from my consumers. I consume only in Java, so the JMX bean kafka.consumer:type=consumer-fetch-manager-metrics,client-id=*/records-lag-max is available.
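For example, a minimal sketch that reads this bean in-process through the platform MBean server (the ObjectName pattern matches every consumer in the JVM; narrow it if you want one specific client-id):

import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ConsumerLagJmx {
    public static void printMaxLag() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // match the fetch-manager metrics bean of every consumer running in this JVM
        Set<ObjectName> beans = server.queryNames(
                new ObjectName("kafka.consumer:type=consumer-fetch-manager-metrics,client-id=*"), null);
        for (ObjectName bean : beans) {
            Object lag = server.getAttribute(bean, "records-lag-max");
            System.out.println(bean.getKeyProperty("client-id") + " records-lag-max = " + lag);
        }
    }
}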
If Jolokia is on your classpath, you can retrieve the value with a GET on /jolokia/read/kafka.consumer:type=consumer-fetch-manager-metrics,client-id=*/records-lag-max and collect all the results in one place.
There is also Burrow, which is easy to configure, but it is a bit outdated (it does not work with 0.10, if I remember correctly).
Answer 2 (score: 2)
I use Spring for my API, and I get the metrics via Java from the consumer itself.
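A minimal sketch of that idea, reading the consumer's own records-lag-max metric directly (this assumes a kafka-clients version with Metric#metricValue(), i.e. 1.0 or newer; on older clients use the deprecated value() instead; the helper name maxRecordsLag is mine, and consumer stands for your already-configured consumer):

import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;

public static double maxRecordsLag(KafkaConsumer<?, ?> consumer) {
    Map<MetricName, ? extends Metric> metrics = consumer.metrics();
    for (Map.Entry<MetricName, ? extends Metric> entry : metrics.entrySet()) {
        MetricName name = entry.getKey();
        if ("consumer-fetch-manager-metrics".equals(name.group())
                && "records-lag-max".equals(name.name())) {
            return (Double) entry.getValue().metricValue();
        }
    }
    return Double.NaN; // the metric is not registered until the consumer has fetched something
}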
Answer 3 (score: 1)
Try using AdminClient#listGroupOffsets(groupID) to retrieve the offsets of all topic partitions associated with the consumer group. Note that this is the internal Scala kafka.admin.AdminClient from the kafka_2.11 artifact (matching the dependency above), not the newer org.apache.kafka.clients.admin.AdminClient. For example:
AdminClient client = AdminClient.createSimplePlaintext("localhost:9092");
Map<TopicPartition, Object> offsets = JavaConversions.asJavaMap(
        client.listGroupOffsets("groupID"));
Long offset = (Long) offsets.get(new TopicPartition("topic", 0));
...
Edit:
The snippet above shows how to get the committed offset of a given partition. The code below shows how to retrieve the LEO (log end offset) of a partition.
public long getLogEndOffset(TopicPartition tp) {
    KafkaConsumer<String, String> consumer = createNewConsumer();
    consumer.assign(Collections.singletonList(tp));
    consumer.seekToEnd(Collections.singletonList(tp));
    long leo = consumer.position(tp);
    consumer.close();
    return leo;
}
private KafkaConsumer<String, String> createNewConsumer() {
    Properties properties = new Properties();
    properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    properties.put(ConsumerConfig.GROUP_ID_CONFIG, "g1");
    properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
    properties.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000");
    properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
    properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
    return new KafkaConsumer<>(properties);
}
Calling getLogEndOffset returns the LEO of the given partition; subtract the committed offset from it, and the result is the lag.
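Putting the two snippets together (a sketch that reuses offsets and getLogEndOffset from above):

TopicPartition tp = new TopicPartition("topic", 0);
long committed = (Long) offsets.get(tp);     // committed offset from listGroupOffsets
long lag = getLogEndOffset(tp) - committed;  // LEO minus committed offset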
Answer 4 (score: 1)
You can set a SetStatisticsHandler callback when creating the consumer. For example, the C# code looks like this:
var config = new ConsumerConfig()
{
    BootstrapServers = entrypoints,
    GroupId = groupid,
    EnableAutoCommit = false,
    StatisticsIntervalMs = 1000 // statistics interval time
};

var consumer = new ConsumerBuilder<Ignore, byte[]>(config)
    .SetStatisticsHandler((consumer, json) => {
        logger.LogInformation(json); // statistics metrics, include consumer lag
    })
    .Build();
For details on the statistics metrics, see STATISTICS.md.
Answer 5 (score: 0)
For your reference, I did this with the following code. Basically, you have to compute the lag of each topic partition manually, as the delta between the currently committed offset and the end offset.
Note that a consumer group can consume multiple topics at the same time, so if you need per-topic lag you have to group and sum the results by topic; see the aggregation sketch after the code.
private static Map<TopicPartition, Long> lagOf(String brokers, String groupId) {
    Properties props = new Properties();
    props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, brokers);
    try (AdminClient client = AdminClient.create(props)) {
        ListConsumerGroupOffsetsResult currentOffsets = client.listConsumerGroupOffsets(groupId);
        try {
            // get current offsets of consuming topic-partitions
            Map<TopicPartition, OffsetAndMetadata> consumedOffsets = currentOffsets.partitionsToOffsetAndMetadata()
                    .get(3, TimeUnit.SECONDS);
            final Map<TopicPartition, Long> result = new HashMap<>();
            doWithKafkaConsumer(groupId, brokers, (c) -> {
                // get latest offsets of consuming topic-partitions
                // lag = latest_offset - current_offset
                Map<TopicPartition, Long> endOffsets = c.endOffsets(consumedOffsets.keySet());
                result.putAll(endOffsets.entrySet().stream().collect(Collectors.toMap(entry -> entry.getKey(),
                        entry -> entry.getValue() - consumedOffsets.get(entry.getKey()).offset())));
            });
            return result;
        } catch (InterruptedException | ExecutionException | TimeoutException e) {
            log.error("", e);
            return Collections.emptyMap();
        }
    }
}

public static void doWithKafkaConsumer(String groupId, String brokers,
        Consumer<KafkaConsumer<String, String>> consumerRunner) {
    Properties props = new Properties();
    props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, brokers);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    try (final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
        consumerRunner.accept(consumer);
    }
}
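For instance, a sketch of the per-topic aggregation mentioned above, built on lagOf (broker list and group id are placeholders):

Map<String, Long> lagPerTopic = lagOf("localhost:9092", "my-group").entrySet().stream()
        .collect(Collectors.groupingBy(
                entry -> entry.getKey().topic(),               // group partition lags by topic
                Collectors.summingLong(Map.Entry::getValue))); // sum lag across partitions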
Answer 6 (score: 0)
Run this standalone code. (It depends on kafka-clients-2.6.0.jar.)
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Properties;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BinaryOperator;
import java.util.stream.Collectors;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;
public class ConsumerGroupLag {

    static String host = "localhost:9092";
    static String topic = "topic02";
    static String groupId = "test-group";

    public static void main(String... vj) {
        ConsumerGroupLag cgl = new ConsumerGroupLag();
        while (true) {
            Map<TopicPartition, PartionOffsets> lag = cgl.getConsumerGroupOffsets(host, topic, groupId);
            System.out.println("$$LAG = " + lag);
            try {
                Thread.sleep(10000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    private final String monitoringConsumerGroupID = "monitoring_consumer_" + UUID.randomUUID().toString();

    public Map<TopicPartition, PartionOffsets> getConsumerGroupOffsets(String host, String topic, String groupId) {
        Map<TopicPartition, Long> logEndOffset = getLogEndOffset(topic, host);

        Set<TopicPartition> topicPartitions = new HashSet<>();
        for (Entry<TopicPartition, Long> s : logEndOffset.entrySet()) {
            topicPartitions.add(s.getKey());
        }

        KafkaConsumer<String, Object> consumer = createNewConsumer(groupId, host);
        Map<TopicPartition, OffsetAndMetadata> committedOffsetMeta = consumer.committed(topicPartitions);

        BinaryOperator<PartionOffsets> mergeFunction = (a, b) -> {
            throw new IllegalStateException(); // duplicate TopicPartition keys should never occur
        };

        Map<TopicPartition, PartionOffsets> result = logEndOffset.entrySet().stream()
                .collect(Collectors.toMap(entry -> entry.getKey(), entry -> {
                    OffsetAndMetadata committed = committedOffsetMeta.get(entry.getKey());
                    long currentOffset = 0;
                    if (committed != null) { // committed offset will be null for unknown consumer groups
                        currentOffset = committed.offset();
                    }
                    return new PartionOffsets(entry.getValue(), currentOffset, entry.getKey().partition(), topic);
                }, mergeFunction));

        consumer.close();
        return result;
    }

    public Map<TopicPartition, Long> getLogEndOffset(String topic, String host) {
        Map<TopicPartition, Long> endOffsets = new ConcurrentHashMap<>();
        KafkaConsumer<?, ?> consumer = createNewConsumer(monitoringConsumerGroupID, host);
        List<PartitionInfo> partitionInfoList = consumer.partitionsFor(topic);
        List<TopicPartition> topicPartitions = partitionInfoList.stream()
                .map(pi -> new TopicPartition(topic, pi.partition())).collect(Collectors.toList());

        consumer.assign(topicPartitions);
        consumer.seekToEnd(topicPartitions);
        topicPartitions.forEach(topicPartition -> endOffsets.put(topicPartition, consumer.position(topicPartition)));

        consumer.close();
        return endOffsets;
    }

    private static KafkaConsumer<String, Object> createNewConsumer(String groupId, String host) {
        Properties properties = new Properties();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, host);
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new KafkaConsumer<>(properties);
    }

    private static class PartionOffsets {
        private long lag;
        private long timestamp = System.currentTimeMillis();
        private long endOffset;
        private long currentOffset;
        private int partion;
        private String topic;

        public PartionOffsets(long endOffset, long currentOffset, int partion, String topic) {
            this.endOffset = endOffset;
            this.currentOffset = currentOffset;
            this.partion = partion;
            this.topic = topic;
            this.lag = endOffset - currentOffset;
        }

        @Override
        public String toString() {
            return "PartionOffsets [lag=" + lag + ", timestamp=" + timestamp + ", endOffset=" + endOffset
                    + ", currentOffset=" + currentOffset + ", partion=" + partion + ", topic=" + topic + "]";
        }
    }
}