Kafka Streams: creating a KTable from a given topic

Asked: 2018-11-27 08:52:45

Tags: java apache-kafka-streams

I'm working on a project and I'm stuck on KTables.

I want to take the records from a topic and put them into a KTable (store), so that I keep exactly one record per key. This is my setup:

---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

  flaskapp:
    build: ./flask-app
    container_name: flask_dev
    ports:
      - '9000:5000'
    volumes:
      - ./flask-app:/app

This is the closest I have come to an answer.

1 Answer:

Answer 0 (score: 2)

Your approach is correct, but you need to use the right Serdes.

In the .reduce() call, the value type should be byte[]. Also note that Materialized.with(...) is a static factory method: chaining it after Materialized.as(storename) silently discards the store name, so set the serdes with withKeySerde()/withValueSerde() instead:

 import org.apache.kafka.common.utils.Bytes;
 import org.apache.kafka.streams.kstream.Consumed;
 import org.apache.kafka.streams.kstream.KStream;
 import org.apache.kafka.streams.kstream.KTable;
 import org.apache.kafka.streams.kstream.Materialized;
 import org.apache.kafka.streams.state.KeyValueStore;

 KStream<Long, byte[]> streamed = builder.stream(topicName, Consumed.with(longSerde, byteSerde));
 KTable<Long, byte[]> records = streamed.groupByKey().reduce(
            (aggValue, newValue) -> newValue,  // keep only the most recent value per key
            Materialized.<Long, byte[], KeyValueStore<Bytes, byte[]>>as(storename)
                    .withKeySerde(longSerde)
                    .withValueSerde(byteSerde));
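The reducer above always discards the old aggregate and returns the newest value, so the materialized KTable converges to exactly one record (the latest) per key. A minimal plain-Java sketch of that semantics, using a HashMap in place of the state store and hypothetical keys/values for illustration:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BinaryOperator;

public class LatestPerKey {
    // The same reducer the topology uses: always keep the newest value.
    static final BinaryOperator<String> KEEP_NEWEST = (aggValue, newValue) -> newValue;

    // Replays (key, value) records in arrival order and keeps only the latest
    // value per key, mirroring what the KTable's state store ends up holding.
    static Map<Long, String> latest(long[] keys, String[] values) {
        Map<Long, String> table = new HashMap<>();
        for (int i = 0; i < keys.length; i++) {
            // Map.merge applies the reducer to (existing value, incoming value).
            table.merge(keys[i], values[i], KEEP_NEWEST);
        }
        return table;
    }

    public static void main(String[] args) {
        long[] keys = {1L, 2L, 1L, 3L, 2L};
        String[] values = {"a", "b", "c", "d", "e"};
        System.out.println(latest(keys, values));  // {1=c, 2=e, 3=d}
    }
}
```

Keys 1 and 2 appear twice, and only their later values ("c" and "e") survive, which is the update-per-key behavior the question asks for.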