We are observing strange behavior in a service test that uses an embedded Kafka broker.
The test is a Spock test; we use the JUnit rule KafkaEmbedded and propagate brokersAsString as follows:
@ClassRule
@Shared
KafkaEmbedded embeddedKafka = new KafkaEmbedded(1)

@Autowired
KafkaListenerEndpointRegistry endpointRegistry

def setupSpec() {
    System.setProperty("kafka.bootstrapServers", embeddedKafka.getBrokersAsString())
}
From inspecting the KafkaEmbedded code, constructing an instance via KafkaEmbedded(int count) results in one Kafka server with two partitions per topic.
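As a side note, KafkaEmbedded also has constructors that pin the partition count and pre-create topics explicitly. A minimal sketch, borrowing the topic names from the logs below purely for illustration (whether this matches the actual test setup is an assumption):

// Sketch: one broker, controlled shutdown enabled, 2 partitions per listed topic.
// The topic names are taken from the log output below, for illustration only.
@ClassRule
@Shared
KafkaEmbedded embeddedKafka =
        new KafkaEmbedded(1, true, 2, "moa", "staggering", "deliveryZipCode_v1")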
To work around partition-assignment and server-client synchronization issues in the test, we loosely follow the strategy of spring-kafka's ContainerTestUtils class:
public static void waitForAssignment(KafkaMessageListenerContainer<String, String> container, int partitions)
        throws Exception {
    log.info("Waiting for " + container.getContainerProperties().getTopics() +
            " to connect to " + partitions + " partitions.")
    int n = 0
    int count = 0
    while (n++ < 600 && count < partitions) {
        count = 0
        // Guard against null before iterating: no partitions may be assigned yet.
        if (container.getAssignedPartitions() != null) {
            container.getAssignedPartitions().each { TopicPartition it ->
                log.info(it.topic() + ":" + it.partition() + "; ")
            }
            count = container.getAssignedPartitions().size()
        }
        if (count < partitions) {
            Thread.sleep(100)
        }
    }
}
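For illustration, a sketch of how such a helper might be wired into the spec; the listener id "moaListener" is a hypothetical placeholder, and the lookup assumes a @KafkaListener declared with that id:

def setup() {
    // "moaListener" is a hypothetical listener id, for illustration only.
    def container = endpointRegistry.getListenerContainer("moaListener")
    waitForAssignment(container as KafkaMessageListenerContainer, 2)
}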
When we watch the logs, we notice the following pattern:
2016-07-29 11:24:02.600 WARN 1160 --- [afka-consumer-1] org.apache.kafka.clients.NetworkClient : Error while fetching metadata with correlation id 1 : {deliveryZipCode_v1=LEADER_NOT_AVAILABLE}
2016-07-29 11:24:02.600 WARN 1160 --- [afka-consumer-1] org.apache.kafka.clients.NetworkClient : Error while fetching metadata with correlation id 1 : {staggering=LEADER_NOT_AVAILABLE}
2016-07-29 11:24:02.600 WARN 1160 --- [afka-consumer-1] org.apache.kafka.clients.NetworkClient : Error while fetching metadata with correlation id 1 : {moa=LEADER_NOT_AVAILABLE}
2016-07-29 11:24:02.696 WARN 1160 --- [afka-consumer-1] org.apache.kafka.clients.NetworkClient : Error while fetching metadata with correlation id 3 : {staggering=LEADER_NOT_AVAILABLE}
2016-07-29 11:24:02.699 WARN 1160 --- [afka-consumer-1] org.apache.kafka.clients.NetworkClient : Error while fetching metadata with correlation id 3 : {moa=LEADER_NOT_AVAILABLE}
2016-07-29 11:24:02.699 WARN 1160 --- [afka-consumer-1] org.apache.kafka.clients.NetworkClient : Error while fetching metadata with correlation id 3 : {deliveryZipCode_v1=LEADER_NOT_AVAILABLE}
2016-07-29 11:24:02.807 WARN 1160 --- [afka-consumer-1] org.apache.kafka.clients.NetworkClient : Error while fetching metadata with correlation id 5 : {deliveryZipCode_v1=LEADER_NOT_AVAILABLE}
2016-07-29 11:24:02.811 WARN 1160 --- [afka-consumer-1] org.apache.kafka.clients.NetworkClient : Error while fetching metadata with correlation id 5 : {staggering=LEADER_NOT_AVAILABLE}
2016-07-29 11:24:02.812 WARN 1160 --- [afka-consumer-1] org.apache.kafka.clients.NetworkClient : Error while fetching metadata with correlation id 5 : {moa=LEADER_NOT_AVAILABLE}
2016-07-29 11:24:03.544 INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked:[]
2016-07-29 11:24:03.544 INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked:[]
2016-07-29 11:24:03.544 INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked:[]
2016-07-29 11:24:03.602 INFO 1160 --- [afka-consumer-1] o.a.k.c.c.internals.AbstractCoordinator : SyncGroup for group timeslot-service-group-06x failed due to coordinator rebalance, rejoining the group
2016-07-29 11:24:03.637 INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned:[]
2016-07-29 11:24:03.637 INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned:[]
2016-07-29 11:24:04.065 INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned:[staggering-0]
2016-07-29 11:24:04.066 INFO 1160 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 50810 (http)
2016-07-29 11:24:04.073 INFO 1160 --- [ main] .t.s.AllocationsDeliveryZonesServiceSpec : Started AllocationsDeliveryZonesServiceSpec in 20.616 seconds (JVM running for 25.456)
2016-07-29 11:24:04.237 INFO 1160 --- [ main] org.eclipse.jetty.server.Server : jetty-9.2.17.v20160517
2016-07-29 11:24:04.265 INFO 1160 --- [ main] o.e.jetty.server.handler.ContextHandler : Started o.e.j.s.ServletContextHandler@6a8598e7{/__admin,null,AVAILABLE}
2016-07-29 11:24:04.270 INFO 1160 --- [ main] o.e.jetty.server.handler.ContextHandler : Started o.e.j.s.ServletContextHandler@104ea372{/,null,AVAILABLE}
2016-07-29 11:24:04.279 INFO 1160 --- [ main] o.eclipse.jetty.server.ServerConnector : Started ServerConnector@3c9b416a{HTTP/1.1}{0.0.0.0:50811}
2016-07-29 11:24:04.430 INFO 1160 --- [ main] o.eclipse.jetty.server.ServerConnector : Started ServerConnector@7c214597{SSL-http/1.1}{0.0.0.0:50812}
2016-07-29 11:24:04.430 INFO 1160 --- [ main] org.eclipse.jetty.server.Server : Started @25813ms
2016-07-29 11:24:04.632 INFO 1160 --- [ main] .t.s.AllocationsDeliveryZonesServiceSpec : waiting...
2016-07-29 11:24:04.662 INFO 1160 --- [ main] .t.s.AllocationsDeliveryZonesServiceSpec : Waiting for [moa] to connect to 2 partitions.
2016-07-29 11:24:13.644 INFO 1160 --- [afka-consumer-1] o.a.k.c.c.internals.AbstractCoordinator : Attempt to heart beat failed since the group is rebalancing, try to re-join group.
2016-07-29 11:24:13.644 INFO 1160 --- [afka-consumer-1] o.a.k.c.c.internals.AbstractCoordinator : Attempt to heart beat failed since the group is rebalancing, try to re-join group.
2016-07-29 11:24:13.644 INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked:[]
2016-07-29 11:24:13.644 INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked:[]
2016-07-29 11:24:13.655 INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned:[staggering-0]
2016-07-29 11:24:13.655 INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned:[moa-0]
2016-07-29 11:24:13.655 INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned:[deliveryZipCode_v1-0]
2016-07-29 11:24:13.740 INFO 1160 --- [ main] .t.s.AllocationsDeliveryZonesServiceSpec : moa:0;
[...]
2016-07-29 11:24:16.644 INFO 1160 --- [ main] .t.s.AllocationsDeliveryZonesServiceSpec : moa:0;
2016-07-29 11:24:16.666 INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked:[staggering-0]
2016-07-29 11:24:16.750 INFO 1160 --- [ main] .t.s.AllocationsDeliveryZonesServiceSpec : moa:0;
[...]
2016-07-29 11:24:23.559 INFO 1160 --- [ main] .t.s.AllocationsDeliveryZonesServiceSpec : moa:0;
2016-07-29 11:24:23.660 INFO 1160 --- [afka-consumer-1] o.a.k.c.c.internals.AbstractCoordinator : Attempt to heart beat failed since the group is rebalancing, try to re-join group.
2016-07-29 11:24:23.660 INFO 1160 --- [afka-consumer-1] o.a.k.c.c.internals.AbstractCoordinator : Attempt to heart beat failed since the group is rebalancing, try to re-join group.
2016-07-29 11:24:23.662 INFO 1160 --- [ main] .t.s.AllocationsDeliveryZonesServiceSpec : moa:0;
2016-07-29 11:24:23.686 INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked:[moa-0]
2016-07-29 11:24:23.686 INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked:[deliveryZipCode_v1-0]
2016-07-29 11:24:23.695 INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned:[moa-0]
2016-07-29 11:24:23.695 INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned:[staggering-0]
2016-07-29 11:24:23.695 INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned:[deliveryZipCode_v1-0]
Note that [...] indicates omitted lines.
We set metadata.max.age.ms to 3000 ms, so the client tries to refresh its metadata information frequently.
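A minimal sketch of where that setting might live in the test's consumer configuration; the surrounding consumer-factory wiring is assumed, and only the two values shown come from the text above:

import org.apache.kafka.clients.consumer.ConsumerConfig

// Sketch: force frequent metadata refreshes in the test consumer config.
Map<String, Object> consumerProps = new HashMap<>()
consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, embeddedKafka.getBrokersAsString())
consumerProps.put(ConsumerConfig.METADATA_MAX_AGE_CONFIG, "3000")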
What puzzles us now: if we wait for two partitions to be assigned, the wait times out. Only if we wait for a single partition does everything eventually run successfully.
Are we misreading the code — does embedded Kafka really create two partitions per topic? And is only one of them assigned to our listener?
Answer 0 (score: 0)
I can't explain the flakiness you are seeing; yes, each topic gets 2 partitions by default. I just ran one of the framework container tests and saw this...
09:24:06.139 INFO [testSlow3-kafka-consumer-1][org.springframework.kafka.listener.KafkaMessageListenerContainer] partitions revoked:[]
09:24:06.611 INFO [testSlow3-kafka-consumer-1][org.springframework.kafka.listener.KafkaMessageListenerContainer] partitions assigned:[testTopic3-1, testTopic3-0]
Answer 1 (score: 0)
For tests, it is important to set spring.kafka.consumer.auto-offset-reset=earliest to avoid race conditions (ordering/timing between consumer and producer); see https://docs.spring.io/spring-kafka/reference/html/#junit
Starting with version 2.5, the consumerProps method sets ConsumerConfig.AUTO_OFFSET_RESET_CONFIG to earliest. This is because, in most cases, you want the consumer to consume any messages sent in a test case. The ConsumerConfig default is latest, which means that messages already sent by a test before the consumer starts will not be received. To revert to the previous behavior, set the property to latest after calling the method.
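A minimal sketch of both variants, assuming spring-kafka-test 2.5+ where the broker object is an EmbeddedKafkaBroker; the group id "testGroup" and the "false" auto-commit flag are placeholder values:

import org.apache.kafka.clients.consumer.ConsumerConfig
import org.springframework.kafka.test.utils.KafkaTestUtils

// Sketch: in 2.5+, consumerProps already sets AUTO_OFFSET_RESET_CONFIG to "earliest".
def props = KafkaTestUtils.consumerProps("testGroup", "false", embeddedKafka)

// To revert to the pre-2.5 behavior, override the property after the call:
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest")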