Kafka Schema Registry error: Failed to write Noop record to kafka store

Date: 2015-11-19 19:30:46

Tags: apache-kafka

I am trying to start the Kafka Schema Registry but keep hitting the following error: Failed to write Noop record to kafka store. The stack trace is below. I have checked connectivity to ZooKeeper and the Kafka brokers, and everything is fine; I can produce messages to Kafka. I tried deleting the _schemas topic and even reinstalling Kafka, but the problem persists. Everything worked fine yesterday, but today, after restarting my Vagrant box, this problem appeared. Is there anything I can do? Thanks.
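(For reference, a minimal sketch of the connectivity checks I mean, assuming the master.mesos ZooKeeper host from the config below; the broker host is a placeholder:)

echo ruok | nc master.mesos 2181     # ZooKeeper should answer 'imok'
nc -zv <broker-host> 9092            # is the broker port reachable?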

[2015-11-19 19:12:25,904] INFO SchemaRegistryConfig values: 
master.eligibility = true
port = 8081
kafkastore.timeout.ms = 500
kafkastore.init.timeout.ms = 60000
debug = false
kafkastore.zk.session.timeout.ms = 30000
request.logger.name = io.confluent.rest-utils.requests
metrics.sample.window.ms = 30000
schema.registry.zk.namespace = schema_registry
kafkastore.topic = _schemas
avro.compatibility.level = none
shutdown.graceful.ms = 1000
response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
metrics.jmx.prefix = kafka.schema.registry
host.name = 12bac2a9529f
metric.reporters = []
kafkastore.commit.interval.ms = -1
kafkastore.connection.url = master.mesos:2181
metrics.num.samples = 2
response.mediatype.default = application/vnd.schemaregistry.v1+json
kafkastore.topic.replication.factor = 3
(io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig:135)

[2015-11-19 19:12:26,535] INFO Initialized the consumer offset to -1        (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:87)
[2015-11-19 19:12:27,167] WARN Creating the schema topic _schemas using a replication factor of 1, which is less than the desired one of 3. If this is a production environment, it's crucial to add more brokers and increase the replication factor of the topic.   (io.confluent.kafka.schemaregistry.storage.KafkaStore:172)
[2015-11-19 19:12:27,262] INFO [kafka-store-reader-thread-_schemas], Starting  (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:68)
[2015-11-19 19:13:27,350] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:57)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:164)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:55)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:37)
at io.confluent.rest.Application.createServer(Application.java:104)
at io.confluent.kafka.schemaregistry.rest.Main.main(Main.java:42)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:151)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:162)
... 4 more
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
at io.confluent.kafka.schemaregistry.storage.KafkaStore.getLatestOffset(KafkaStore.java:363)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.waitUntilKafkaReaderReachesLastOffset(KafkaStore.java:220)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:149)
... 5 more

3 Answers:

Answer 0 (score: 1)

The error message is misleading. As other developers have suggested in other posts, I recommend the following.

1) Make sure ZooKeeper is running (check the log files and that the process is alive).

2) Make sure the nodes in the Kafka cluster can reach each other (telnet to each host and port).

3) If 1 and 2 check out, then I do not recommend creating another topic (such as _schema2, as others have suggested in some posts) and pointing kafkastore.topic in the schema registry config file at the new topic. Instead (a rough shell sketch of these steps follows below):
3.1) Stop the processes (ZooKeeper, Kafka server)
3.2) Wipe the data in the ZooKeeper data directory
3.3) Restart ZooKeeper, the Kafka server, and finally the schema registry service (it should work!)
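A rough sketch of those three steps, assuming a single-node Confluent installation run from its root directory (the ZooKeeper dataDir below is an assumption; use the dataDir from your zookeeper.properties):

# 3.1) stop everything, schema registry first
bin/schema-registry-stop
bin/kafka-server-stop
bin/zookeeper-server-stop

# 3.2) wipe ZooKeeper's data directory; note this destroys all cluster
#      metadata (broker registrations, topics, consumer offsets)
rm -rf /var/lib/zookeeper/*

# 3.3) restart in order, schema registry last
#      (run each in its own shell, or background them as here)
bin/zookeeper-server-start etc/kafka/zookeeper.properties &
bin/kafka-server-start etc/kafka/server.properties &
bin/schema-registry-start etc/schema-registry/schema-registry.properties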

P.S.: If you do try to create another topic instead, you may well get stuck when you try to consume the data from that Kafka topic (it happened to me, and it took me hours to figure out!!!)

Answer 1 (score: 0)

I got the same error. The problem was that I expected Kafka to use the kafka namespace in ZooKeeper, so I set it in schema-registry.properties:

kafkastore.connection.url=localhost:2181/kafka

But in the Kafka server.properties I had not set it at all; the config contained

zookeeper.connect=localhost:2181

So I simply added the ZooKeeper namespace to this property and restarted Kafka:

zookeeper.connect=localhost:2181/kafka

Your problem may be that your schema registry expects the '/' (root) namespace, but your Kafka configuration defines something else. Can you post your Kafka config?
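A quick way to compare the two settings (the file paths below are assumptions for a typical Confluent install; adjust to yours). The chroot suffix after host:port must be identical in both files, or absent from both:

grep '^zookeeper.connect=' /etc/kafka/server.properties
grep '^kafkastore.connection.url=' /etc/schema-registry/schema-registry.properties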

Alternatively, you can use zkCli.sh to find out where in ZooKeeper Kafka stores its topic information:

bin/zkCli.sh -server localhost:2181
Welcome to ZooKeeper!
ls /kafka
[cluster, controller, controller_epoch, brokers, admin, isr_change_notification, consumers, latest_producer_id_block, config]

Answer 2 (score: 0)

I made the following changes in schema-registry.properties, and they worked for me:

#kafkastore.connection.url=localhost:2181
kafkastore.bootstrap.servers=PLAINTEXT://localhost:9092
kafkastore.topic=<topic name>

For another problem when starting the server, I also ran the following command:

./bin/kafka-topics --alter --zookeeper localhost:2181 --topic <topic name> --config cleanup.policy=compact
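To confirm the change took effect, you can describe the topic (a hedged check using the same era's CLI and the same placeholder topic name); the Configs field in the output should now show cleanup.policy=compact:

./bin/kafka-topics --describe --zookeeper localhost:2181 --topic <topic name>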

Good luck!