I'm sending messages to Kafka with code along these lines:
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("acks", "all");
props.put("retries", 0);
props.put("batch.size", 16384);
props.put("linger.ms", 1);
props.put("buffer.memory", 33554432);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
// Note: the next two are Streams/consumer settings and are ignored by the producer
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "testo");
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

Producer<String, String> producer = new KafkaProducer<>(props);
for (int i = 0; i < 1000; i++) {
    producer.send(new ProducerRecord<>(
        "topico",
        String.format("{\"type\":\"test\", \"t\":%.3f, \"k\":%d}", System.nanoTime() * 1e-9, i)));
}
producer.close(); // flush pending records before exiting
I want to use Kafka Streams (0.10.0.1) to count the total number of messages received in the last hour. I tried:
final KStreamBuilder builder = new KStreamBuilder();
final KStream<String, String> metrics = builder.stream(Serdes.String(), Serdes.String(), "topico");
metrics.countByKey(TimeWindows.of("Hourly", 3600 * 1000))
       .mapValues(Object::toString)
       .to("output");
I'm new to Kafka / Streams. What should I do?
Answer 0 (score: 1)
First of all, you're missing the code that actually starts your stream processing:
KafkaStreams streams = new KafkaStreams(builder, config);
streams.start();
Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
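The `config` object passed to `KafkaStreams` above is not shown in the answer. A minimal sketch of what it might contain, reusing the broker address and application id from the question (the exact settings you need depend on your setup):

```java
import java.util.Properties;

public class StreamsConfigSketch {
    // Minimal configuration sketch for the `config` object used above.
    // "application.id" is mandatory for a Streams application; the broker
    // address is assumed to be the question's local broker.
    static Properties baseConfig() {
        Properties config = new Properties();
        config.put("application.id", "testo");             // from the question
        config.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        return config;
    }

    public static void main(String[] args) {
        System.out.println(baseConfig().getProperty("application.id"));
    }
}
```

Unlike the producer properties, serdes for Streams are configured with Streams-specific keys (or passed per-operation), so don't copy the producer's `key.serializer`/`value.serializer` entries here.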
Answer 1 (score: 1)
To aggregate two streams, you can use the join methods; there are different joins available on KStream. For example, if you want to join a KStream with a KTable:
KStream<String, String> left = builder.stream("topic1");
KTable<String, String> right = builder.table("topic2");
left.leftJoin(right, (leftValue, rightValue) -> customFunction(rightValue, leftValue));
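The `customFunction` in the snippet is a placeholder from the answer, not a Kafka API. As a plain-Java sketch of what such a value combiner could look like (names and the `"|"` format are illustrative assumptions):

```java
public class JoinSketch {
    // Hypothetical stand-in for the answer's customFunction: combines the
    // KTable value (rightValue) with the KStream value (leftValue).
    // In a left join, rightValue is null when the table has no matching key.
    static String customFunction(String rightValue, String leftValue) {
        return leftValue + "|" + (rightValue == null ? "none" : rightValue);
    }

    public static void main(String[] args) {
        System.out.println(customFunction("user42", "click")); // matched table row
        System.out.println(customFunction(null, "click"));     // no match in the KTable
    }
}
```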
Finally, start the streams instance:
KafkaStreams streams = new KafkaStreams(topology, config);
streams.start();
Answer 2 (score: 1)
I'm also fairly new to Kafka Streams and don't know the old API, but with the newer one (2.1.x) something like this should work:
kstream.mapValues((readOnlyKey, value) -> "test")
       .groupByKey()
       .windowedBy(TimeWindows.of(1000 * 60))
       .count()
       .toStream()
       .selectKey((key, value) -> Instant.ofEpochMilli(key.window().end())
                                         .truncatedTo(ChronoUnit.HOURS)
                                         .toEpochMilli())
       .groupByKey(Serialized.with(Serdes.Long(), Serdes.Long()))
       .reduce((reduce, newVal) -> reduce + newVal)
       .toStream()
       .peek((key, value) -> log.info("{}={}", key, value));
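The `selectKey` step in the pipeline above re-keys each windowed count by the start of the hour its window ends in, so that the second `groupByKey`/`reduce` sums the per-minute counts into hourly totals. The bucketing itself is pure `java.time` and can be sketched in isolation (the timestamps below are made-up examples):

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class HourBucket {
    // Maps a window-end timestamp (epoch millis) to the start of its hour,
    // mirroring the selectKey step in the answer above.
    static long hourBucket(long epochMillis) {
        return Instant.ofEpochMilli(epochMillis)
                      .truncatedTo(ChronoUnit.HOURS)
                      .toEpochMilli();
    }

    public static void main(String[] args) {
        long end = Instant.parse("2021-01-01T10:37:22Z").toEpochMilli();
        // All windows ending between 10:00 and 10:59:59.999 map to the same key
        System.out.println(hourBucket(end));
    }
}
```

Every per-minute window that ends within the same hour yields the same key, which is what lets the final `reduce` add their counts together.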