Kafka Connect (Confluent 5.0, 4.1.2, or 3.0) fails to start

Date: 2018-09-26 07:53:19

Tags: apache-kafka apache-kafka-connect confluent

We have an SSL-enabled Kafka cluster (hosted as a third-party managed service). We are now trying to set up Kafka Connect (Confluent 5.0) with a third-party sink (the WePay BigQuery connector). When starting Kafka Connect in standalone mode, everything works like a charm. Unfortunately, in distributed mode Kafka Connect suddenly fails with the following:

[2018-09-25 15:01:46,248] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser:109)
[2018-09-25 15:01:46,248] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser:110)
[2018-09-25 15:01:46,667] INFO Kafka cluster ID: Q9PaAEeWSbOavVmHTQS5sA (org.apache.kafka.connect.util.ConnectUtils:59)
[2018-09-25 15:01:46,685] INFO Logging initialized @10512ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log:193)
[2018-09-25 15:01:46,726] INFO Added connector for http://:8083 (org.apache.kafka.connect.runtime.rest.RestServer:119)
[2018-09-25 15:01:46,760] INFO Advertised URI: http://192.168.4.207:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:267)
[2018-09-25 15:01:46,796] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser:109)
[2018-09-25 15:01:46,796] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser:110)
ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed:117)
java.lang.NoSuchMethodError: org.apache.kafka.common.metrics.Sensor.add(Lorg/apache/kafka/common/metrics/CompoundStat;)Z
    at org.apache.kafka.connect.runtime.Worker$WorkerMetricsGroup.<init>(Worker.java:731)
    at org.apache.kafka.connect.runtime.Worker.<init>(Worker.java:112)
    at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:88)

We have Googled the specific error but could not find anything. It looks like a version problem somewhere (hence the NoSuchMethodError), but we have no idea where to start.
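A NoSuchMethodError at startup typically means two different kafka-clients jars are visible to the worker (for example, the one bundled with the Confluent distribution plus an older one pulled in by a connector). A minimal sketch to audit this — the directory paths are assumptions, adjust them to your install:

```shell
# Extract the version from a kafka-clients jar filename (standard naming
# assumed); prints nothing for jars that are not kafka-clients.
jar_version() {
  basename "$1" | sed -n 's/^kafka-clients-\(.*\)\.jar$/\1/p'
}

# List every kafka-clients jar Connect can see: the distribution's own
# java directory plus any plugin.path directories (hypothetical locations).
for dir in /opt/confluent-5.0.0/share/java /opt/connect-plugins; do
  find "$dir" -name 'kafka-clients-*.jar' 2>/dev/null
done
```

More than one distinct version in that listing would explain the signature mismatch: a Worker class compiled against one client version is loading a different one at runtime.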

With Confluent 4.1.2 we get a different error:

[2018-09-26 15:14:05,498] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed:112)
org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
    at org.apache.kafka.connect.runtime.distributed.WorkerGroupMember.<init>(WorkerGroupMember.java:144)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder.<init>(DistributedHerder.java:182)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder.<init>(DistributedHerder.java:159)
    at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:95)
Caused by: java.lang.NoSuchMethodError: org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.<init>(Lorg/apache/kafka/common/utils/LogContext;Lorg/apache/kafka/clients/KafkaClient;Lorg/apache/kafka/clients/Metadata;Lorg/apache/kafka/common/utils/Time;JJI)V
    at org.apache.kafka.connect.runtime.distributed.WorkerGroupMember.<init>(WorkerGroupMember.java:114)
    ... 3 more

When we use the same setup with Kafka Connect (Confluent 3.0), we get yet another error:

[2018-09-26 10:04:24,588] INFO AvroDataConfig values: 
    schemas.cache.config = 1000
    enhanced.avro.schema.support = false
    connect.meta.data = true
 (io.confluent.connect.avro.AvroDataConfig:169)
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.kafka.common.utils.AppInfoParser.unregisterAppInfo(Ljava/lang/String;Ljava/lang/String;)V
    at org.apache.kafka.connect.runtime.distributed.WorkerGroupMember.stop(WorkerGroupMember.java:194)
    at org.apache.kafka.connect.runtime.distributed.WorkerGroupMember.<init>(WorkerGroupMember.java:122)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder.<init>(DistributedHerder.java:150)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder.<init>(DistributedHerder.java:132)
    at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:82)

This is the distributed.properties:

bootstrap.servers=*****
group.id=testGroup
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=****
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=****
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-statuses
security.protocol=SSL
ssl.truststore.location=truststore.jks
ssl.truststore.password=****
ssl.keystore.type=PKCS12
ssl.keystore.location=keystore.p12
ssl.keystore.password=****
ssl.key.password=****
plugin.path=/*/confluent-5.0.0/share/java

And, for reference, the standalone.properties:

bootstrap.servers=***
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=***
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=***
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=connect.offsets
consumer.security.protocol=SSL
consumer.ssl.truststore.location=truststore.jks
consumer.ssl.truststore.password=***
consumer.ssl.keystore.type=PKCS12
consumer.ssl.keystore.location=keystore.p12
consumer.ssl.keystore.password=***
consumer.ssl.key.password=***

Any help would be greatly appreciated.

1 Answer:

Answer 0: (score: 0)

I just found out that you have to prefix the Kafka client configs in the Kafka Connect properties file: https://docs.confluent.io/current/connect/userguide.html#overriding-producer-and-consumer-settings

Your standalone.properties prefixes the configs with `consumer.`, but your distributed.properties does not:

consumer.security.protocol=SSL
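Applied to the distributed.properties shown in the question, that means repeating the SSL client settings with a `consumer.` prefix (and, for source connectors, a `producer.` prefix) alongside the unprefixed worker-level settings. A sketch of the additional lines, with placeholders kept for the secrets:

```properties
# Consumer overrides for sink tasks (values mirror the worker-level SSL config)
consumer.security.protocol=SSL
consumer.ssl.truststore.location=truststore.jks
consumer.ssl.truststore.password=****
consumer.ssl.keystore.type=PKCS12
consumer.ssl.keystore.location=keystore.p12
consumer.ssl.keystore.password=****
consumer.ssl.key.password=****
```

The unprefixed `security.protocol=SSL` block already in distributed.properties covers the worker's own clients (group membership and the config/offset/status topics); the prefixed copies cover the consumers that the sink tasks create.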