Spring Cloud Stream > @SendTo does not send to Kafka, but directly through a direct channel

Asked: 2018-10-18 07:45:26

Tags: apache-kafka spring-cloud-stream

I have two channels in my application, bound to two Kafka topics:

  1. input
  2. error.input.my-group

The input channel is configured so that, in case of an error, messages are sent to a DLQ (error.input.my-group).

I have a StreamListener on "error.input.my-group" that is configured to send the message back to the original channel:

@StreamListener(Channels.DLQ)
@SendTo(Channels.INPUT)
public Message<?> reRoute(Message<?> failed) {
    messageDeliveryService.waitUntilCanBeDelivered(failed);
    processed.incrementAndGet();
    // The retry count travels with the message in a custom header
    Integer retries = failed.getHeaders().get(X_RETRIES_HEADER, Integer.class);
    retries = retries == null ? 1 : retries + 1;
    if (retries < MAX_RETRIES) {
        logger.info("Retry (count={}) for {}", retries, failed);
        return buildRetryMessage(failed, retries);
    }
    else {
        logger.error("Retries exhausted (-> sent to parking lot) for {}", failed);
        channels.parkingLot().send(MessageBuilder.fromMessage(failed)
                .setHeader(BinderHeaders.PARTITION_OVERRIDE,
                        failed.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID))
                .build());
    }
    return null;
}

private Message<?> buildRetryMessage(Message<?> failed, int retries) {
    return MessageBuilder.fromMessage(failed)
            .setHeader(X_RETRIES_HEADER, retries)
            .setHeader(BinderHeaders.PARTITION_OVERRIDE,
                    failed.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID))
            .build();
}
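The listener above also references a few members the post does not show; a plausible sketch of those assumptions (names and values are guesses, not from the original post):

// Assumed supporting members, not shown in the question
private static final String X_RETRIES_HEADER = "x-retries"; // hypothetical header name
private static final int MAX_RETRIES = 3;                   // hypothetical retry limit
private final AtomicInteger processed = new AtomicInteger();
private final Logger logger = LoggerFactory.getLogger(getClass());

@Autowired
private Channels channels;                              // the bound Channels bean

@Autowired
private MessageDeliveryService messageDeliveryService;  // hypothetical throttling helper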

Here is my Channels class:

@Component
public interface Channels {

    String INPUT = "INPUT";
    // Default name used by SCS (error.<input-topic-name>.<group-name>)
    String DLQ = "error.input.my-group";
    String PARKING_LOT = "parkingLot.input.my-group";

    @Input(INPUT)
    SubscribableChannel input();

    @Input(DLQ)
    SubscribableChannel dlq();

    @Output(PARKING_LOT)
    MessageChannel parkingLot();
}

And here is my configuration:

spring:
  cloud:
    stream:
      default:
        group: my-group
      binder:
        headerMode: headers
      kafka:
        binder:
          # Necessary in order to commit the message to all the Kafka brokers handling the partition -> maximum durability
          # -1 = all
          requiredAcks: -1
          brokers: bootstrap.kafka.svc.cluster.local:9092,bootstrap.kafka.svc.cluster.local:9093,bootstrap.kafka.svc.cluster.local:9094,bootstrap.kafka.svc.cluster.local:9095,bootstrap.kafka.svc.cluster.local:9096,bootstrap.kafka.svc.cluster.local:9097
        bindings:
          input:
            consumer:
              partitioned: true
              enableDlq: true
              dlqProducerProperties:
                configuration:
                  key.serializer: "org.apache.kafka.common.serialization.ByteArraySerializer"
          "[error.input.my-group]":
            consumer:
              # We cannot lose any message and we don't have a DLQ for the DLQ, therefore we only commit on success
              autoCommitOnError: false
              ackEachRecord: true
              partitioned: true
              enableDlq: false
      bindings:
        input:
          contentType: application/xml
          destination: input
        "[error.input.my-group]":
          contentType: application/xml
          destination: error.input.my-group
        "[parkingLot.input.my-group]":
          contentType: application/xml
          destination: parkingLot.input.my-group

The problem is that my messages are never sent to Kafka; they go straight to my input channel instead. Am I misunderstanding something?

1 Answer:

Answer 0 (score: 1):

For the @SendTo to go to the Kafka destination rather than directly to the channel, you need an output binding.
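The @SendTo(Channels.INPUT) above names an input binding, which is a local DirectChannel, so the reply is handed straight to that channel's subscribers inside the JVM and never reaches the binder. A minimal sketch of the fix, assuming a new output channel named input-out (the channel name and its YAML key are illustrative, not part of the original answer):

@Component
public interface Channels {

    String INPUT = "INPUT";
    String DLQ = "error.input.my-group";
    String PARKING_LOT = "parkingLot.input.my-group";
    // New, hypothetical output channel used to re-publish retries through Kafka
    String INPUT_OUT = "input-out";

    @Input(INPUT)
    SubscribableChannel input();

    @Input(DLQ)
    SubscribableChannel dlq();

    @Output(PARKING_LOT)
    MessageChannel parkingLot();

    @Output(INPUT_OUT)
    MessageChannel inputOut();
}

The listener then replies through the output binding instead of the input channel:

@StreamListener(Channels.DLQ)
@SendTo(Channels.INPUT_OUT) // output binding, so the reply is produced to Kafka
public Message<?> reRoute(Message<?> failed) {
    // ... retry logic unchanged ...
}

and the new binding is pointed at the original topic in the configuration:

spring:
  cloud:
    stream:
      bindings:
        "[input-out]":
          destination: input   # the same Kafka topic the input channel consumes

With that in place, a retried message is produced to the input topic and re-consumed like any other record, instead of being short-circuited to the local channel.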