The Elasticsearch sink connector throws the following exception:
[2018-05-07 11:40:38,975] ERROR WorkerSinkTask{id=elasticsearch-sink-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:172)
org.apache.kafka.connect.errors.DataException: de******ense
at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:95)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:467)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:301)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:205)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:173)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id -1
Caused by: org.apache.kafka.common.errors.SerializationException: Unknown magic byte!
[2018-05-07 11:40:38,976] ERROR WorkerSinkTask{id=elasticsearch-sink-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:173)
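The "Unknown magic byte!" error means the AvroConverter expected the Confluent wire format (a 0x00 magic byte followed by a 4-byte big-endian schema ID) but found something else at the start of the payload. A minimal sketch for classifying a raw message payload, assuming that framing:

```python
def classify_payload(payload: bytes) -> str:
    """Guess how a raw Kafka message payload is framed.

    Confluent-serialized Avro starts with magic byte 0x00 followed by a
    4-byte big-endian schema ID; anything else triggers the
    "Unknown magic byte!" SerializationException in AvroConverter.
    """
    if len(payload) >= 5 and payload[0] == 0:
        schema_id = int.from_bytes(payload[1:5], "big")
        return f"confluent-avro (schema id {schema_id})"
    return "not Confluent-framed (plain string/JSON?)"
```

Running this against the first bytes of a key from the topic would show whether the key is actually Confluent-framed Avro or a plain string.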
Configuration of quickstart-elasticsearch.properties:
name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=de******ense
key.ignore=true
compact.map.entries=false
connection.url=http://127.0.0.1:9197
type.name=kafka-connect
I am passing key.ignore=true, but it still tries to deserialize the key.
From WorkerSinkTask.java:467:
SchemaAndValue keyAndSchema = keyConverter.toConnectData(msg.topic(), msg.key());
The connector tries to deserialize the key, but the messages in the topic have no usable key.
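Note that key.ignore=true is an Elasticsearch-sink setting: it only controls whether the key is used to build the document ID. Key deserialization happens earlier, in the Connect worker itself, before the connector ever sees the record. A hypothetical simplification of that ordering:

```python
class SerializationError(Exception):
    pass

def avro_key_converter(topic: str, key_bytes: bytes):
    # Simplified stand-in for AvroConverter: rejects any key that does not
    # start with the Confluent 0x00 magic byte
    if key_bytes and key_bytes[0] != 0:
        raise SerializationError("Unknown magic byte!")
    return key_bytes

def convert_messages(records, key_converter, value_converter):
    # The worker converts key AND value for every record; connector-level
    # options such as key.ignore apply only after this step succeeds
    return [
        (key_converter(topic, key), value_converter(topic, value))
        for topic, key, value in records
    ]
```

So with key.converter set to AvroConverter, a plain String key fails at the worker level, and key.ignore never gets a chance to take effect.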
Sample data from the topic:
{"EXPENSE_CODE":{"string":"NL1230"},"EXPENSE_CODE_DESCRIPTION":{"string":"ABC Company"},"NO_OF_DEALS":{"long":7}}
{"EXPENSE_CODE":{"string":"NL1220"},"EXPENSE_CODE_DESCRIPTION":{"string":"XYZ Company"},"NO_OF_DEALS":{"long":308}}
{"EXPENSE_CODE":{"string":"NL1210"},"EXPENSE_CODE_DESCRIPTION":{"string":"Alberthijn - Amsterdam"},"NO_OF_DEALS":{"long":287}}
{"EXPENSE_CODE":{"string":"NL1200"},"EXPENSE_CODE_DESCRIPTION":{"string":"KLM - ADAM"},"NO_OF_DEALS":{"long":609}}
{"EXPENSE_CODE":{"string":"NL1240"},"EXPENSE_CODE_DESCRIPTION":{"string":"EXIDS- Global Limit"},"NO_OF_DEALS":{"long":9786}}
Configuration of schema-registry/connect-avro-distributed.properties:
bootstrap.servers=localhost:9192
#schema.registry.url=http://localhost:9193
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:9193
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:9193
# tried schema enable true as well for keys
key.converter.schemas.enable=false
value.converter.schemas.enable=true
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-statuses
config.storage.replication.factor=1
offset.storage.replication.factor=1
status.storage.replication.factor=1
#offset.storage.partitions=25
#status.storage.partitions=5
internal.key.converter.schema.registry.url=http://localhost:9193
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter.schema.registry.url=http://localhost:9193
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
plugin.path=bin/../share/java # tried share/java as well
It points to the correct Schema Registry URL.
Answer (score: 1)
The problem occurs when KSQL writes to a table or stream: it writes the key as a String and the value as Avro.
If you change the configuration as shown below, it will work:
vi etc/schema-registry/connect-avro-distributed.properties
bootstrap.servers=lrv141rq:9192
group.id=connect-cluster
key.converter=org.apache.kafka.connect.storage.StringConverter
key.converter.schema.registry.url=http://localhost:9193
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:9193
key.converter.schemas.enable=false
value.converter.schemas.enable=true
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-statuses
config.storage.replication.factor=1
offset.storage.replication.factor=1
status.storage.replication.factor=1
#offset.storage.partitions=25
#status.storage.partitions=5
internal.key.converter=org.apache.kafka.connect.storage.StringConverter
internal.key.converter.schema.registry.url=http://localhost:9193
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter.schema.registry.url=http://localhost:9193
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
plugin.path=bin/../share/java
vi etc/kafka-connect-elasticsearch/quickstart-elasticsearch.properties
name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=deal-expense,emailfilters
key.ignore=true
compact.map.entries=false
connection.url=http://127.0.0.1:9197
type.name=kafka-connect
The key change is:
key.converter=org.apache.kafka.connect.storage.StringConverter
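StringConverter works here because it simply decodes the raw key bytes as UTF-8 text, while AvroConverter insists on the Confluent wire format. A rough sketch of the difference, assuming that framing (magic byte 0x00 plus a 4-byte big-endian schema ID):

```python
import struct

def string_convert(payload: bytes) -> str:
    # StringConverter (simplified): decode raw bytes as UTF-8, so the
    # plain String keys written by KSQL always succeed
    return payload.decode("utf-8")

def avro_convert(payload: bytes) -> int:
    # AvroConverter (simplified): insist on the Confluent wire format --
    # magic byte 0x00 followed by a 4-byte big-endian schema registry ID
    if len(payload) < 5 or payload[0] != 0:
        raise ValueError("Unknown magic byte!")
    return struct.unpack(">I", payload[1:5])[0]
```

With this fix, keys go through string_convert-style handling while the Avro-serialized values keep using the schema registry, so the value.converter settings stay unchanged.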