Spring Cloud Stream JSON dead-letter queue with Avro messages

Time: 2019-08-09 10:59:56

Tags: avro spring-kafka spring-cloud-stream confluent-schema-registry

I am using Spring Cloud Stream with Avro and the Confluent Schema Registry. I use a single DLQ topic for all services, so messages with different schemas may land in that topic. I have disabled dynamic schema generation to make sure no incorrect messages get through (spring.cloud.stream.schema.avro.dynamicSchemaGenerationEnabled=false).

The problem, however, is that a message landing on the DLQ may be lost because its schema is absent there. I would therefore like to produce messages to the DLQ in JSON format while keeping Avro in the rest of the pipeline. I would appreciate it if someone could help me achieve this or point me to an example.

1 answer:

Answer 0 (score: 1)

If you are using Stream 2.1 or later, disable DLQ processing in the binder and use a ListenerContainerCustomizer bean to add a custom ErrorHandler to the listener container; you can use a SeekToCurrentErrorHandler with a custom recoverer. You can use the DeadLetterPublishingRecoverer as a starting point and override the method quoted below, after the wiring sketch...
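First, a minimal wiring sketch, assuming Spring Cloud Stream 2.1+ with the Kafka binder and Spring Kafka 2.2+; MyJsonDlqRecoverer is a hypothetical subclass (sketched at the end of this answer), and the bean, class, and template names are illustrative:

import org.springframework.cloud.stream.config.ListenerContainerCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;

@Configuration
public class DlqConfig {

    @Bean
    public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer(
            KafkaTemplate<Object, Object> dlqTemplate) {
        // Leave the binder's enableDlq unset (false) and handle dead-lettering here:
        // retry a failed record up to 3 times, then hand it to the custom recoverer,
        // which publishes it to the DLQ.
        return (container, destination, group) -> container.setErrorHandler(
                new SeekToCurrentErrorHandler(new MyJsonDlqRecoverer(dlqTemplate), 3));
    }

}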

/**
 * Subclasses can override this method to customize the producer record to send to the
 * DLQ. The default implementation simply copies the key and value from the consumer
 * record and adds the headers. The timestamp is not set (the original timestamp is in
 * one of the headers). IMPORTANT: if the partition in the {@link TopicPartition} is
 * less than 0, it must be set to null in the {@link ProducerRecord}.
 * @param record the failed record
 * @param topicPartition the {@link TopicPartition} returned by the destination
 * resolver.
 * @param headers the headers - original record headers plus DLT headers.
 * @param data the value to use instead of the consumer record value.
 * @param isKey true if key deserialization failed.
 * @return the producer record to send.
 * @see KafkaHeaders
 */
protected ProducerRecord<Object, Object> createProducerRecord(ConsumerRecord<?, ?> record,
        TopicPartition topicPartition, RecordHeaders headers, @Nullable byte[] data, boolean isKey) {
    // framework implementation omitted in the original answer
}
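And here is a minimal sketch of such an override, assuming the failed record's value is still an Avro GenericRecord when it reaches the recoverer; the class name MyJsonDlqRecoverer is made up for this answer, and it relies on GenericRecord.toString(), which renders an Avro record as a JSON string:

import java.nio.charset.StandardCharsets;

import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.header.internals.RecordHeaders;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.lang.Nullable;

// Hypothetical recoverer that writes the failed record to the DLQ as JSON bytes
// instead of Avro, so no schema is needed on the DLQ topic.
public class MyJsonDlqRecoverer extends DeadLetterPublishingRecoverer {

    public MyJsonDlqRecoverer(KafkaTemplate<Object, Object> template) {
        // The two-arg super constructor also accepts a destination resolver,
        // useful for a single shared DLQ topic instead of the default <topic>.DLT.
        super(template);
    }

    @Override
    protected ProducerRecord<Object, Object> createProducerRecord(ConsumerRecord<?, ?> record,
            TopicPartition topicPartition, RecordHeaders headers, @Nullable byte[] data, boolean isKey) {

        Object value = record.value();
        Object dlqValue;
        if (value instanceof GenericRecord) {
            // GenericRecord.toString() produces the JSON rendering of the record.
            dlqValue = ((GenericRecord) value).toString().getBytes(StandardCharsets.UTF_8);
        }
        else {
            // Fall back to the raw bytes, e.g. when deserialization itself failed.
            dlqValue = data != null ? data : value;
        }
        // Per the Javadoc above: a negative partition must be mapped to null.
        return new ProducerRecord<>(topicPartition.topic(),
                topicPartition.partition() < 0 ? null : topicPartition.partition(),
                record.key(), dlqValue, headers);
    }

}

The KafkaTemplate injected here must be configured with serializers that can write these values (for example a ByteArraySerializer for the value) rather than the Avro serializer.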