Schema Registry rejects an unchanged schema as incompatible

Date: 2019-01-25 15:47:45

Tags: apache-kafka avro apache-kafka-streams confluent-schema-registry

We have a Kafka cluster running with Avro schemas stored in Confluent's Schema Registry. After a recent redeployment of one of our Streams applications, we started seeing incompatible-schema errors on a single topic (EmailSent). It is the only topic that fails, and we get the error whenever a new EmailSent event is committed to the topic.


Caused by: org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: {"type":"record","name":"EmailSent","namespace":"com.company_name.communications.schemas","fields":[{"name":"customerId","type":"long","doc":"Customer ID in the customer service"},{"name":"messageId","type":"long","doc":"Message ID of the sent email"},{"name":"sentTime","type":{"type":"string","avro.java.string":"String"},"doc":"Campaign send time in 'yyyy-MM-dd HH:mm:ss.SSS'"},{"name":"campaignId","type":"long","doc":"ID of the campaign in the marketing suite"},{"name":"appId","type":["null","long"],"doc":"App ID associated with the sent email, if the email relates to a specific app","default":null}],"version":1}
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Schema being registered is incompatible with an earlier schema; error code: 409
    at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:170)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:187)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:238)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:230)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:225)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:59)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:91)
    at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:72)
    at io.confluent.kafka.serializers.KafkaAvroSerializer.serialize(KafkaAvroSerializer.java:54)
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:91)
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:78)
    at org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:87)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:85)
    at org.apache.kafka.streams.kstream.internals.KStreamFilter$KStreamFilterProcessor.process(KStreamFilter.java:43)
    at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
    at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:211)
    at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:124)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:85)
    at org.apache.kafka.streams.kstream.internals.KStreamMap$KStreamMapProcessor.process(KStreamMap.java:42)
    at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
    at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:211)
    at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:124)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:85)
    at org.apache.kafka.streams.kstream.internals.KStreamPeek$KStreamPeekProcessor.process(KStreamPeek.java:44)
    at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
    at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:211)
    at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:124)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:85)
    at org.apache.kafka.streams.kstream.internals.KStreamMapValues$KStreamMapProcessor.process(KStreamMapValues.java:41)
    at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
    at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:211)
    at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:124)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:85)
    at org.apache.kafka.streams.kstream.internals.ForwardingCacheFlushListener.apply(ForwardingCacheFlushListener.java:42)
    at org.apache.kafka.streams.state.internals.CachingKeyValueStore.putAndMaybeForward(CachingKeyValueStore.java:92)
    at org.apache.kafka.streams.state.internals.CachingKeyValueStore.access$000(CachingKeyValueStore.java:35)
    at org.apache.kafka.streams.state.internals.CachingKeyValueStore$1.apply(CachingKeyValueStore.java:79)
    at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:141)
    at org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:232)
    at org.apache.kafka.streams.state.internals.ThreadCache.maybeEvict(ThreadCache.java:245)
    at org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:153)
    at org.apache.kafka.streams.state.internals.CachingKeyValueStore.putInternal(CachingKeyValueStore.java:193)
    at org.apache.kafka.streams.state.internals.CachingKeyValueStore.put(CachingKeyValueStore.java:188)
    at org.apache.kafka.streams.state.internals.CachingKeyValueStore.put(CachingKeyValueStore.java:35)
    at org.apache.kafka.streams.state.internals.InnerMeteredKeyValueStore.put(InnerMeteredKeyValueStore.java:199)
    at org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore.put(MeteredKeyValueBytesStore.java:121)
    at org.apache.kafka.streams.kstream.internals.KTableSource$KTableSourceProcessor.process(KTableSource.java:63)
    at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
    at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:211)
    at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:124)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:85)
    at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:80)
    at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:222)
    at org.apache.kafka.streams.processor.internals.AssignedTasks.process(AssignedTasks.java:409)
    at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:308)
    at org.apache.kafka.streams.processor.internals.StreamThread.processAndMaybeCommit(StreamThread.java:939)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:819)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:771)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:741)

This schema has been unchanged since June 2018, and we had been processing EmailSent events successfully until now.

The PR associated with the deployment of our Streams app did not change the schema, the Streams processor that throws the error, or any of the Streams app's dependencies. My suspicion is the Schema Registry itself. Has anyone seen something similar, or have any insight into what could cause this failure? I couldn't find anything about error code 409 — does that ring a bell for anyone?
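The registry's compatibility check can be reproduced outside the Streams app, which helps narrow down whether the registry or the app is at fault. A minimal sketch, assuming the registry runs at `schema-registry:8081` and the subject follows the default `<topic>-value` naming (both are assumptions; `schema.json` is a hypothetical file holding the candidate schema wrapped as `{"schema": "<escaped Avro JSON>"}`):

```shell
# Assumed registry endpoint and subject name (TopicNameStrategy default)
REGISTRY="http://schema-registry:8081"
SUBJECT="EmailSent-value"

# Show the compatibility level in effect for this subject
# (404 here means it falls back to the global level at ${REGISTRY}/config)
curl -s "${REGISTRY}/config/${SUBJECT}" || true

# Ask the registry whether the candidate schema would be accepted,
# without actually registering it
curl -s -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data @schema.json \
  "${REGISTRY}/compatibility/subjects/${SUBJECT}/versions/latest" || true
```

A response of `{"is_compatible": false}` from the second call confirms the 409 is a genuine compatibility verdict rather than a transient serializer problem.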

Thanks.

1 answer:

Answer 0 (score: 0)

I don't think the server is lying. You haven't shown us the two schemas to compare (the one in the registry versus the one in the error message).
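To do that comparison, you can pull what the registry currently holds and diff it against the schema string in the SerializationException. A sketch with the same assumed host and subject name as above:

```shell
# Assumed registry endpoint and subject name
REGISTRY="http://schema-registry:8081"
SUBJECT="EmailSent-value"

# List all registered schema versions for the subject
curl -s "${REGISTRY}/subjects/${SUBJECT}/versions" || true

# Fetch the latest registered schema; the "schema" field in the
# response is what the new schema is being checked against
curl -s "${REGISTRY}/subjects/${SUBJECT}/versions/latest" || true
```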

One way to work around the problem is to set the subject's compatibility to NONE,

export KAFKA_TOPIC=logEvents
curl -X PUT http://schema-registry:8081/config/${KAFKA_TOPIC}-value -d '{"compatibility": "NONE"}' -H "Content-Type:application/json"

(do the same for ${KAFKA_TOPIC}-key, if needed)

then upload your new schema.

However:

  1. Set it back to backward compatibility (or whatever your original config was) once you're done.
  2. This can break Avro consumers that have to read messages written with both the old schema and the new, incompatible one.
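Restoring the setting (point 1) mirrors the PUT above; a sketch assuming the same host, and assuming the subject previously used BACKWARD (the registry's default — substitute your original level if it differed):

```shell
# Assumed registry endpoint and subject name
REGISTRY="http://schema-registry:8081"
SUBJECT="EmailSent-value"

# Restore the subject's compatibility once the new schema is registered
curl -s -X PUT -H "Content-Type: application/json" \
  -d '{"compatibility": "BACKWARD"}' \
  "${REGISTRY}/config/${SUBJECT}" || true
```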