Kafka Streams ClassCastException to the same class when applying a map function

Asked: 2018-04-16 01:25:11

Tags: apache-kafka avro apache-kafka-streams spring-cloud-stream

UserRecord.java (auto-generated by the Maven Avro plugin)

UserRecord extends SpecificRecordBase implements SpecificRecord

UserRecordSerde.java

UserRecordSerde extends SpecificAvroSerde
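
The question does not show the body of UserRecordSerde, but a serde referenced by class name in application.yml typically needs a no-arg subclass that pins the generic type. A minimal sketch of what it presumably looks like, assuming Confluent's SpecificAvroSerde API:

```java
import io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde;

// Hypothetical reconstruction: binds SpecificAvroSerde to the generated
// UserRecord type so it can be referenced by class name in application.yml.
public class UserRecordSerde extends SpecificAvroSerde<UserRecord> {
}
```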

application.yml

spring.cloud.stream.bindings.input.destination: userTopic
spring.cloud.stream.bindings.input.consumer.useNativeDecoding: true
spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde: UserRecordSerde
spring.cloud.stream.kafka.streams.bindings.input.consumer.keySerde: LongSerde
spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde: LongSerde
spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde: SpecificAvroSerde

Class - StreamListener - the raw stream carries a null key and a UserRecord object in Avro

@StreamListener
public KStream<Long, ArrayList<UserRecord>> handleUserRecords(@Input("input") KStream<Object, UserRecord> userRecordStream) {
    Map<String, Object> serdeConfig = new HashMap<>();
    serdeConfig.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://localhost:8081");
    serdeConfig.put(KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG, true);

    Serde<ArrayList<UserRecord>> userRecordListSerde = new SpecificAvroSerde<>();
    userRecordListSerde.configure(serdeConfig, false);

    // userRecordSerde was used but never declared in the original snippet;
    // presumably an instance of the UserRecordSerde shown above.
    Serde<UserRecord> userRecordSerde = new UserRecordSerde();
    userRecordSerde.configure(serdeConfig, false);

    return userRecordStream
        .map((key, value) -> new KeyValue<>(value.getUserID(), value))
        .groupByKey(Serialized.with(Serdes.Long(), userRecordSerde))
        .aggregate(ArrayList::new, (key, value, agg) -> {
            agg.add(value);
            return agg;
        }, userRecordListSerde)
        .toStream();
}

Exception

java.lang.ClassCastException: com.example.UserRecord cannot be cast to com.example.UserRecord
    at org.apache.kafka.streams.kstream.internals.KStreamMap$KStreamMapProcessor.process(KStreamMap.java:41)
    at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
    at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:208)
    at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:124)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:85)
    at org.apache.kafka.streams.kstream.internals.KStreamMap$KStreamMapProcessor.process(KStreamMap.java:42)
    at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
    at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:208)

1 Answer:

Answer 0 (score: 3)

Why not remove it from the configuration entirely? Drop spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde: UserRecordSerde and use SpecificAvroSerde directly via the default configuration.
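
A sketch of what application.yml would look like after that change, assuming the remaining settings stay as posted (only the per-binding valueSerde entry is removed, leaving the binder-level default.value.serde to take effect):

```yaml
spring.cloud.stream.bindings.input.destination: userTopic
spring.cloud.stream.bindings.input.consumer.useNativeDecoding: true
spring.cloud.stream.kafka.streams.bindings.input.consumer.keySerde: LongSerde
spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde: LongSerde
spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde: SpecificAvroSerde
```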