I have been playing with Confluent Kafka for a few days now to understand the platform better. I am getting serialization exceptions for some malformed Avro messages sent to one topic. Let me explain the problem with the facts:
<kafka.new.version>0.10.2.0-cp1</kafka.new.version>
<confluent.version>3.2.0</confluent.version>
<avro.version>1.7.7</avro.version>
Intent: very simple, a producer is sending Avro records, and the consumer should consume all of them without any problem (it may skip any message that is incompatible with the schema in the Schema Registry). Usage (a minimal producer sketch follows the list):
Producer ->
Key -> StringSerializer
Value -> KafkaAvroSerializer
Consumer ->
Key -> StringDeserializer
Value -> KafkaAvroDeserializer
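For context, the producer side is just the standard Confluent setup. A minimal sketch, assuming hypothetical host names and that TestUser stands for the class Avro generated from my com.catapult.TestUser schema:

import java.util.Properties;
import io.confluent.kafka.serializers.KafkaAvroSerializer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties producerProps = new Properties();
producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "somehost:9092");
producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringSerializer.class);
producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class);
producerProps.put("schema.registry.url", "http://schemaregistryhost:8081");

KafkaProducer<String, Object> producer = new KafkaProducer<>(producerProps);
// TestUser is the Avro-generated SpecificRecord class; the builder call is illustrative
TestUser user = TestUser.newBuilder().setTestname("someone").build();
producer.send(new ProducerRecord<>("topic.ongo.test3.user14", "key-1", user));
producer.flush();
producer.close();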
Other consumer properties (FYI):
properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "somehost:9092");
properties.put(ConsumerConfig.GROUP_ID_CONFIG, "myconsumer-4");
properties.put(ConsumerConfig.CLIENT_ID_CONFIG, "someclient-4");
properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);
properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, io.confluent.kafka.serializers.KafkaAvroDeserializer.class);
properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
// deserialize into the generated SpecificRecord classes rather than GenericRecord
properties.put(KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG, true);
properties.put("schema.registry.url", "schemaregistryhost:8081");
I was able to consume messages without any problem until some other producer mistakenly sent one bad message to this topic and thereby registered a modified latest schema in the Schema Registry. (We have compatibility checking disabled in the Schema Registry, so any message can be sent to the topic and a new schema version gets registered every time; we can also toggle this back on.)
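For reference, the toggle I mean is the per-subject compatibility level, which can be set through the Schema Registry REST API. A minimal sketch, assuming the value subject is named topic.ongo.test3.user14-value and the registry runs at schemaregistryhost:8081:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// PUT /config/{subject} sets the compatibility level for a single subject
URL url = new URL("http://schemaregistryhost:8081/config/topic.ongo.test3.user14-value");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("PUT");
conn.setRequestProperty("Content-Type", "application/vnd.schemaregistry.v1+json");
conn.setDoOutput(true);
try (OutputStream os = conn.getOutputStream()) {
    os.write("{\"compatibility\": \"BACKWARD\"}".getBytes(StandardCharsets.UTF_8));
}
System.out.println("HTTP " + conn.getResponseCode()); // expect 200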
Now, because of this one bad message, poll() fails with a serialization exception. It does tell me the failing offset, and I could seek() past that offset (sketched below), but that does not sound like a good solution. I also tried setting max poll records to 10 and a very small poll() timeout, so that I could skip at most 10 records by catching the exception, but for some reason max.poll.records is not taking effect: the code fails immediately with the serialization error even though I am consuming from the beginning and the bad message sits at offset 240.
properties.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "10");
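For completeness, this is roughly what the seek() workaround looks like. Since this client version does not expose the failing partition/offset in a structured way, the sketch parses them out of the exception message, which is obviously brittle (the regex is my own assumption, tied to the current wording of the message):

import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.SerializationException;

// matches e.g. "... for partition topic.ongo.test3.user14-0 at offset 220"
static final Pattern BAD_RECORD = Pattern.compile("for partition (.+)-(\\d+) at offset (\\d+)");

static ConsumerRecords<String, Object> pollSkippingBadRecords(KafkaConsumer<String, Object> consumer) {
    while (true) {
        try {
            return consumer.poll(1000);
        } catch (SerializationException e) {
            Matcher m = BAD_RECORD.matcher(e.getMessage());
            if (!m.find()) {
                throw e; // message format changed, nothing we can do
            }
            TopicPartition tp = new TopicPartition(m.group(1), Integer.parseInt(m.group(2)));
            consumer.seek(tp, Long.parseLong(m.group(3)) + 1); // step over the poison record
        }
    }
}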
Another simple solution would be to use ByteArrayDeserializer for the value and KafkaAvroDecoder inside my application, so I can handle deserialization failures per record myself.
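Along those lines, here is a rough sketch of that approach. I am using KafkaAvroDeserializer directly instead of the older KafkaAvroDecoder, since it plugs into the new consumer more naturally; host and topic names are the placeholders from above:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import io.confluent.kafka.serializers.KafkaAvroDeserializer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.SerializationException;

// same consumer properties as above, except the value deserializer:
// properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.ByteArrayDeserializer.class);

Map<String, Object> avroConfig = new HashMap<>();
avroConfig.put("schema.registry.url", "http://schemaregistryhost:8081");
avroConfig.put("specific.avro.reader", true);
KafkaAvroDeserializer avroDeserializer = new KafkaAvroDeserializer();
avroDeserializer.configure(avroConfig, false); // false = configure as value deserializer

KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(properties);
consumer.subscribe(Collections.singletonList("topic.ongo.test3.user14"));
while (true) {
    ConsumerRecords<String, byte[]> records = consumer.poll(1000);
    for (ConsumerRecord<String, byte[]> record : records) {
        try {
            Object value = avroDeserializer.deserialize(record.topic(), record.value());
            // ... process value ...
        } catch (SerializationException e) {
            // only this record is dropped; the poll loop keeps running
            System.err.printf("Skipping bad record at %s-%d offset %d%n", record.topic(), record.partition(), record.offset());
        }
    }
}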
I believe I am missing something or doing something wrong. Adding the exception:
Exception in thread "main" org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition topic.ongo.test3.user14-0 at offset 220
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id 186
Caused by: org.apache.avro.AvroTypeException: Found com.catapult.TestUser, expecting com.catapult.TestUser, missing required field testname
at org.apache.avro.io.ResolvingDecoder.doAction(ResolvingDecoder.java:292)
at org.apache.avro.io.parsing.Parser.advance(Parser.java:88)
at org.apache.avro.io.ResolvingDecoder.readFieldOrder(ResolvingDecoder.java:130)
at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:176)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:151)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:142)
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:131)
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:92)
at io.confluent.kafka.serializers.KafkaAvroDeserializer.deserialize(KafkaAvroDeserializer.java:54)
at org.apache.kafka.clients.consumer.internals.Fetcher.parseRecord(Fetcher.java:869)
at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:775)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:473)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1062)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)