spring-cloud-stream message conversion exception

Date: 2018-03-21 14:33:52

Tags: spring-cloud-stream

After upgrading one of our services to spring-cloud-stream 2.0.0.RC3, we get an exception when it tries to consume messages produced by a service that still uses the old spring-cloud-stream version, Ditmars.RELEASE:

  

ERROR 31241 --- [container-4-C-1] o.s.integration.handler.LoggingHandler: org.springframework.messaging.converter.MessageConversionException: Cannot convert from [[B] to [com.watercorp.messaging.types.incoming.UsersDeletedMessage] for GenericMessage [payload=byte[371], headers={kafka_offset=1, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer@62029d0d, kafka_timestampType=CREATE_TIME, message_id=1645508761, id=f4e947de-22e6-b629-229b-4fa961c73f2d, type=USERS_DELETED, kafka_receivedPartitionId=4, contentType=text/plain, kafka_receivedTopic=user, kafka_receivedTimestamp=1521641760698, timestamp=1521641772477}], failedMessage=GenericMessage [payload=byte[371], headers={kafka_offset=1, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer@62029d0d, kafka_timestampType=CREATE_TIME, message_id=1645508761, id=f4e947de-22e6-b629-229b-4fa961c73f2d, type=USERS_DELETED, kafka_receivedPartitionId=4, contentType=text/plain, kafka_receivedTopic=user, kafka_receivedTimestamp=1521641760698, timestamp=1521641772477}]
    at org.springframework.messaging.handler.annotation.support.PayloadArgumentResolver.resolveArgument(PayloadArgumentResolver.java:144)
    at org.springframework.messaging.handler.invocation.HandlerMethodArgumentResolverComposite.resolveArgument(HandlerMethodArgumentResolverComposite.java:116)
    at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.getMethodArgumentValues(InvocableHandlerMethod.java:137)
    at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:109)
    at org.springframework.cloud.stream.binding.StreamListenerMessageHandler.handleRequestMessage(StreamListenerMessageHandler.java:55)
    at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:109)
    at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:164)
    at org.springframework.cloud.stream.binding.DispatchingStreamListenerMessageHandler.handleRequestMessage(DispatchingStreamListenerMessageHandler.java:87)
    at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:109)
    at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:157)
    at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116)
    at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:132)
    at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:105)
    at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:73)
    at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:463)
    at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:407)
    at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:181)
    at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:160)
    at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:47)
    at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:108)
    at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:203)
    at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.access$300(KafkaMessageDrivenChannelAdapter.java:70)
    at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:387)
    at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:364)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:1001)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords(KafkaMessageListenerContainer.java:981)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:932)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:801)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:689)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:745)

It looks like the cause is that the contentType header sent along with the message is text/plain, although it should be application/json.
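For reference, the header value can be checked directly with a throwaway diagnostic listener that temporarily replaces the real one while debugging; this is only an illustrative sketch (the dumpHeaders method is not part of the actual service), reusing the same UserInput.INPUT binding as the real listener shown further down:

    // Hypothetical diagnostic listener (not in the real service): it takes the
    // raw Message so no payload conversion is attempted, and simply logs every
    // header -- including contentType -- of each incoming record.
    @StreamListener(UserInput.INPUT)
    public void dumpHeaders(Message<?> message) {
        message.getHeaders().forEach((name, value) ->
                logger.info(String.format("header %s = %s", name, value)));
    }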
Producer configuration:

spring:
  cloud:
      stream:
        kafka:
          binder:
            brokers: kafka
            defaultBrokerPort: 9092
            zkNodes: zookeeper
            defaultZkPort: 2181
            minPartitionCount: 2
            replicationFactor: 1
            autoCreateTopics: true
            autoAddPartitions: true
            headers: type,message_id
            requiredAcks: 1
            configuration:
              "[security.protocol]": PLAINTEXT #TODO: This is a workaround. Should be security.protocol
          bindings:
            user-output:
              producer:
                sync: true
                configuration:
                  retries: 10000
        default:
          binder: kafka
          contentType: application/json
          group: user-service
          consumer:
            maxAttempts: 1
          producer:
            partitionKeyExtractorClass: com.watercorp.user_service.messaging.PartitionKeyExtractor
        bindings:
          user-output:
            destination: user
            producer:
              partitionCount: 5
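The producer configuration above references a custom com.watercorp.user_service.messaging.PartitionKeyExtractor whose source is not shown here; such a class is typically a PartitionKeyExtractorStrategy along the following lines (a rough sketch only, and keying on the message_id header is an assumption):

    import org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy;
    import org.springframework.messaging.Message;

    // Rough sketch: the real com.watercorp extractor is not shown in the
    // question; using the "message_id" header as the key is an assumption.
    public class PartitionKeyExtractor implements PartitionKeyExtractorStrategy {

        @Override
        public Object extractKey(Message<?> message) {
            // Messages with the same key always land on the same partition.
            return message.getHeaders().get("message_id");
        }
    }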
Consumer configuration:

spring:
  cloud:
      stream:
        kafka:
          binder:
            brokers: kafka
            defaultBrokerPort: 9092
            minPartitionCount: 2
            replicationFactor: 1
            autoCreateTopics: true
            autoAddPartitions: true
            headers: type,message_id
            requiredAcks: 1
            configuration:
              "[security.protocol]": PLAINTEXT #TODO: This is a workaround. Should be security.protocol
          bindings:
            user-input:
              consumer:
                autoRebalanceEnabled: true
                autoCommitOnError: true
                enableDlq: true            
        default:
          binder: kafka
          contentType: application/json
          group: enrollment-service
          consumer:
            maxAttempts: 1
            headerMode: embeddedHeaders
          producer:
            partitionKeyExtractorClass: com.watercorp.messaging.PartitionKeyExtractor
            headerMode: embeddedHeaders
        bindings:          
          user-input:
            destination: user
            consumer:
              concurrency: 5
              partitioned: true          

Consumer @StreamListener:

    @StreamListener(target = UserInput.INPUT, condition = "headers['type']=='" + USERS_DELETED + "'")
    public void handleUsersDeletedMessage(@Valid UsersDeletedMessage usersDeletedMessage, @Header(value = "kafka_receivedPartitionId",
            required = false) String partitionId, @Header(value = KAFKA_TOPIC_HEADER_NAME, required = false) String topic, @Header(MESSAGE_ID_HEADER_NAME) String messageId) throws Throwable {
        logger.info(String.format("Received users deleted message message, message id: %s topic: %s partition: %s", messageId, topic, partitionId));
        handleMessageWithRetry(_usersDeletedMessageHandler, usersDeletedMessage, messageId, topic);
    }
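For context, the condition above dispatches on a custom type header, which the producing side has to set (along with message_id, both listed under headers: in the binder configuration). A minimal sketch of what sending such a message might look like; the userOutput channel variable and the USERS_DELETED constant are assumptions, not taken from the actual service:

    // Hypothetical producer-side snippet: builds a UsersDeletedMessage payload
    // and attaches the "type" and "message_id" headers that the binder is
    // configured to transport and that the @StreamListener condition matches on.
    Message<UsersDeletedMessage> message = MessageBuilder
            .withPayload(usersDeletedMessage)
            .setHeader("type", USERS_DELETED)
            .setHeader("message_id", messageId)
            .build();
    userOutput.send(message);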

1 Answer:

Answer 0 (score: 2)

This is a bug in RC3; it was recently fixed on master and will be in the GA release at the end of this month. In the meantime, can you try 2.0.0.BUILD-SNAPSHOT?

I was able to reproduce the problem, and using the snapshot fixed it for me...

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-stream</artifactId>
        <version>2.0.0.BUILD-SNAPSHOT</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-stream-binder-kafka</artifactId>
        <version>2.0.0.BUILD-SNAPSHOT</version>
        <exclusions>
            <exclusion>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
        <version>2.0.0.BUILD-SNAPSHOT</version>
    </dependency>

EDIT

For completeness:

Ditmars producer:

@SpringBootApplication
@EnableBinding(Source.class)
public class So49409104Application {

    public static void main(String[] args) {
        SpringApplication.run(So49409104Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(MessageChannel output) {
        return args -> {
            Foo foo = new Foo();
            foo.setBar("bar");
            output.send(new GenericMessage<>(foo));
        };
    }


    public static class Foo {

        private String bar;

        public String getBar() {
            return this.bar;
        }

        public void setBar(String bar) {
            this.bar = bar;
        }

        @Override
        public String toString() {
            return "Foo [bar=" + this.bar + "]";
        }

    }

}

spring:
  cloud:
    stream:
      bindings:
        output:
          destination: so49409104a
          content-type: application/json
          producer:
            header-mode: embeddedHeaders
Elmhurst consumer:

@SpringBootApplication
@EnableBinding(Sink.class)
public class So494091041Application {

    public static void main(String[] args) {
        SpringApplication.run(So494091041Application.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void listen(Foo foo) {
        System.out.println(foo);
    }

    public static class Foo {

        private String bar;

        public String getBar() {
            return this.bar;
        }

        public void setBar(String bar) {
            this.bar = bar;
        }

        @Override
        public String toString() {
            return "Foo [bar=" + this.bar + "]";
        }

    }

}

spring:
  cloud:
    stream:
      bindings:
        input:
          group: so49409104
          destination: so49409104a
          consumer:
            header-mode: embeddedHeaders
          content-type: application/json

Result:

Foo [bar=bar]

The header-mode settings are needed because Kafka now supports headers natively, so the default in 2.0 is native.