I implemented a relatively simple fire-and-forget query system in Spring Kafka (https://github.com/trajano/spring-kafka-stream-example). The current behavior is:

"I need the answer to this question; whoever answers first, tell me, and I will trust it."

I want to change the behavior slightly to:

"I need the answer to this question; whoever answers first and passes my internal test condition, tell me, and I will trust it."

However, I cannot see anything in ReplyingKafkaTemplate that would let me do this. From the API docs, I think I may have to extend the class to add that logic somehow. My guess is to override onMessage(), but that would mean copying its body up to the line

RequestReplyFuture<K, V, R> future = this.futures.remove(correlationId);

and adding the consumer-record check there.
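In plain Java terms, ignoring the Kafka plumbing, the selection semantics I am after could be sketched roughly like this (a hypothetical illustration only, with made-up reply strings): only the first reply that passes the internal test should complete the pending future, and everything after that should be ignored.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Predicate;

public class FirstValidReply {
    public static void main(String[] args) {
        CompletableFuture<String> pending = new CompletableFuture<>();
        // The application-supplied "internal test condition".
        Predicate<String> internalTest = reply -> reply.startsWith("OK:");

        // Replies arriving from different responders, in order.
        for (String reply : new String[] { "FAIL: uncle1", "OK: uncle2", "OK: uncle3" }) {
            if (internalTest.test(reply)) {
                // complete() is a no-op once the future is done,
                // so only the first valid reply wins.
                pending.complete(reply);
            }
        }
        System.out.println(pending.join());
    }
}
```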
Answer 0 (score: 3)
The ReplyingKafkaTemplate only supports a single reply for each request; any additional replies are discarded. For this use case, version 2.3 added the AggregatingReplyingKafkaTemplate, which waits for multiple replies or a timeout.
Here is a test case...
@KafkaListener(id = "def1", topics = { D_REQUEST, E_REQUEST, F_REQUEST })
@SendTo // default REPLY_TOPIC header
public String dListener1(String in) {
    return in.toUpperCase();
}

@KafkaListener(id = "def2", topics = { D_REQUEST, E_REQUEST, F_REQUEST })
@SendTo // default REPLY_TOPIC header
public String dListener2(String in) {
    return in.substring(0, 1) + in.substring(1).toUpperCase();
}
and
@Test
public void testAggregateNormal() throws Exception {
    AggregatingReplyingKafkaTemplate<Integer, String, String> template = aggregatingTemplate(
            new TopicPartitionOffset(D_REPLY, 0), 2);
    try {
        template.setDefaultReplyTimeout(Duration.ofSeconds(30));
        ProducerRecord<Integer, String> record = new ProducerRecord<>(D_REQUEST, null, null, null, "foo");
        RequestReplyFuture<Integer, String, Collection<ConsumerRecord<Integer, String>>> future =
                template.sendAndReceive(record);
        future.getSendFuture().get(10, TimeUnit.SECONDS); // send ok
        ConsumerRecord<Integer, Collection<ConsumerRecord<Integer, String>>> consumerRecord =
                future.get(30, TimeUnit.SECONDS);
        assertThat(consumerRecord.value().size()).isEqualTo(2);
        Iterator<ConsumerRecord<Integer, String>> iterator = consumerRecord.value().iterator();
        String value1 = iterator.next().value();
        assertThat(value1).isIn("fOO", "FOO");
        String value2 = iterator.next().value();
        assertThat(value2).isIn("fOO", "FOO");
        assertThat(value2).isNotSameAs(value1);
        assertThat(consumerRecord.topic()).isEqualTo(AggregatingReplyingKafkaTemplate.AGGREGATED_RESULTS_TOPIC);
    }
    finally {
        template.stop();
        template.destroy();
    }
}
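The release decision of the aggregating template can be thought of as a predicate over the replies gathered so far plus a timed-out flag. The following standalone sketch, which substitutes plain strings for ConsumerRecord instances purely as an illustration, shows the idea behind the release size of 2 used in the test above.

```java
import java.util.List;
import java.util.function.BiPredicate;

public class ReleaseStrategyDemo {
    public static void main(String[] args) {
        int releaseSize = 2;
        // Release the collected replies once enough have arrived,
        // or release whatever is on hand when the timeout flag is set.
        BiPredicate<List<String>, Boolean> releaseStrategy =
                (replies, isTimeout) -> replies.size() >= releaseSize || isTimeout;

        System.out.println(releaseStrategy.test(List.of("FOO"), false));        // one reply, keep waiting
        System.out.println(releaseStrategy.test(List.of("FOO", "fOO"), false)); // both replies are in
        System.out.println(releaseStrategy.test(List.of("FOO"), true));         // timed out, release partial
    }
}
```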
Answer 1 (score: 1)
Since I am still on Spring Cloud Greenwich.SR3, which does not have Spring Boot 2.2 and therefore no Spring Kafka 2.3, I did the following as a stopgap:
package net.trajano.springkafka.foo;

import java.util.List;
import java.util.function.BiPredicate;
import java.util.stream.Collectors;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.GenericMessageListenerContainer;
import org.springframework.kafka.requestreply.ReplyingKafkaTemplate;
import org.springframework.kafka.support.KafkaHeaders;

/**
 * This is a {@link ReplyingKafkaTemplate} that adds a simple validation semantic so it can take multiple responses
 * and choose the first one that matches the validation condition.
 * <p>
 * The use case for this is a farm of topic responders that are decoupled from the calling service; the calling
 * service does not know who will respond or when, but knows some property of the response that makes it
 * <em>valid</em>.
 * <p>
 * This can be explained using a dinner party analogy:
 * <ol>
 * <li>0:00 Kid: Does anyone know the answers to the square root of 144 and 2+2?
 * <li>0:01 Uncle 1: 13, 5
 * <li>0:02 Uncle 2: 12, 4
 * <li>0:05 Kid: Okay, I gathered a few answers.
 * <li>0:05 Kid: Filter out whoever can't answer 2+2.
 * <li>0:05 Kid: The proper answer is 12, 4.
 * <li>0:06 Uncle 3: 12, 4
 * <li>0:06 Kid: Sorry uncle 3, you're too slow, so I am ignoring you.
 * </ol>
 *
 * @param <K> key
 * @param <V> request value
 * @param <R> response value
 */
public class ValidatingReplyingKafkaTemplate<K, V, R> extends ReplyingKafkaTemplate<K, V, R> {

    private final BiPredicate<K, R> validationPredicate;

    public ValidatingReplyingKafkaTemplate(ProducerFactory<K, V> producerFactory,
            GenericMessageListenerContainer<K, R> replyContainer,
            BiPredicate<K, R> validationPredicate) {
        super(producerFactory, replyContainer);
        this.validationPredicate = validationPredicate;
    }

    public ValidatingReplyingKafkaTemplate(ProducerFactory<K, V> producerFactory,
            GenericMessageListenerContainer<K, R> replyContainer,
            boolean autoFlush,
            BiPredicate<K, R> validationPredicate) {
        super(producerFactory, replyContainer, autoFlush);
        this.validationPredicate = validationPredicate;
    }

    /**
     * Filters out records that do not pass the validation predicate.
     * <p>
     * An initial filter ensures that only records with a correlation ID defined are processed. This does
     * <b>not</b> check whether the correlation ID is one that needs to be considered, because {@code futures}
     * is not accessible; it relies on the super class to perform that extra check.
     */
    @Override
    public void onMessage(List<ConsumerRecord<K, R>> data) {
        super.onMessage(data.stream()
                .filter(record -> record.headers().lastHeader(KafkaHeaders.CORRELATION_ID) != null)
                .filter(record -> validationPredicate.test(record.key(), record.value()))
                .collect(Collectors.toList()));
    }
}
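The filtering done in onMessage() can be exercised without a broker. This hypothetical sketch replaces ConsumerRecord with a simple record type (the Reply type and its fields are made up for illustration) and shows that replies lacking a correlation ID, or failing the predicate, are dropped before they would reach the super class.

```java
import java.util.List;
import java.util.function.BiPredicate;
import java.util.stream.Collectors;

public class ValidationFilterDemo {
    // Stand-in for a Kafka ConsumerRecord plus its correlation-ID header.
    record Reply(String correlationId, Integer key, String value) {}

    public static void main(String[] args) {
        // e.g. "answers 2+2 correctly" from the dinner party analogy
        BiPredicate<Integer, String> validationPredicate =
                (key, value) -> value.endsWith("4");

        List<Reply> batch = List.of(
                new Reply("c1", 1, "13, 5"),  // wrong answer: dropped by the predicate
                new Reply(null, 1, "12, 4"),  // no correlation ID: dropped by the first filter
                new Reply("c1", 1, "12, 4")); // valid: would be passed to the super class

        List<Reply> accepted = batch.stream()
                .filter(r -> r.correlationId() != null)
                .filter(r -> validationPredicate.test(r.key(), r.value()))
                .collect(Collectors.toList());

        System.out.println(accepted.size());
        System.out.println(accepted.get(0).value());
    }
}
```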
An example of its usage is in https://github.com/trajano/spring-kafka-stream-example