When I try to create an S3 source connector, I get the error "Failed to instantiate Partitioner class". What I have done so far: installed confluent-hub and confluentinc/kafka-connect-s3-source, and exported CLASSPATH. (The latest version is 1.0.1.)
$ confluent-hub install --no-prompt confluentinc/kafka-connect-s3-source:1.0.1
$ export CLASSPATH=/connector/share/confluent-hub-components/confluentinc-kafka-connect-s3-source/lib/*
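As a side note, Kafka Connect can also discover connectors through the worker's plugin.path setting instead of CLASSPATH, which is the usually recommended approach. A minimal sketch of the relevant line in worker.properties, assuming the confluent-hub install directory used above:

# worker.properties (sketch; directory assumed from the confluent-hub install above)
plugin.path=/connector/share/confluent-hub-components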
The connector settings are the defaults from the documentation (connector.properties):
name=s3-source
tasks.max=1
connector.class=io.confluent.connect.s3.source.S3SourceConnector
s3.bucket.name=confluent-kafka-connect-s3-testing
format.class=io.confluent.connect.s3.format.avro.AvroFormat
confluent.license=
confluent.topic.bootstrap.servers=kafka:9092
confluent.topic.replication.factor=1
transforms=AddPrefix
transforms.AddPrefix.type=org.apache.kafka.connect.transforms.RegexRouter
transforms.AddPrefix.regex=.*
transforms.AddPrefix.replacement=copy_of_$0
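For completeness: the S3 source connector also exposes a partitioner.class option, whose documented default should be the DefaultPartitioner from the storage common library. Pinning it explicitly (a sketch, assuming that default still applies in 1.0.1) makes it clearer which class the connector is failing to load:

# assumed default; requires the storage-common jars shipped in the connector's lib directory
partitioner.class=io.confluent.connect.storage.partitioner.DefaultPartitioner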
Detailed error:
$ connect-standalone.sh worker.properties connector.properties
[2019-10-16 12:36:02,410] INFO Kafka version: 2.3.0 (org.apache.kafka.common.utils.AppInfoParser:117)
[2019-10-16 12:36:02,411] INFO Kafka commitId: fc1aaa116b661c8a (org.apache.kafka.common.utils.AppInfoParser:118)
[2019-10-16 12:36:02,412] INFO Kafka startTimeMs: 1571229362410 (org.apache.kafka.common.utils.AppInfoParser:119)
[2019-10-16 12:36:02,675] INFO License for single cluster, single node (io.confluent.license.LicenseManager:417)
[2019-10-16 12:36:02,683] INFO Closing License Store (io.confluent.license.LicenseStore:197)
[2019-10-16 12:36:02,683] INFO Stopping KafkaBasedLog for topic _confluent-command (org.apache.kafka.connect.util.KafkaBasedLog:164)
[2019-10-16 12:36:02,686] INFO [Producer clientId=s3-source-license-manager] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer:1153)
[2019-10-16 12:36:02,701] INFO Stopped KafkaBasedLog for topic _confluent-command (org.apache.kafka.connect.util.KafkaBasedLog:190)
[2019-10-16 12:36:02,702] INFO Closed License Store (io.confluent.license.LicenseStore:199)
[2019-10-16 12:36:02,704] ERROR WorkerConnector{id=s3-source} Error while starting connector (org.apache.kafka.connect.runtime.WorkerConnector:119)
org.apache.kafka.connect.errors.ConnectException: Failed to instantiate Partitioner class
at io.confluent.connect.s3.source.S3SourceConnectorConfig.getPartitioner(S3SourceConnectorConfig.java:612)
at io.confluent.connect.s3.source.S3SourceConnector.doStart(S3SourceConnector.java:94)
at io.confluent.connect.s3.source.S3SourceConnector.start(S3SourceConnector.java:86)
at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:111)
at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:136)
at org.apache.kafka.connect.runtime.WorkerConnector.transitionTo(WorkerConnector.java:196)
at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:252)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.startConnector(StandaloneHerder.java:293)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:209)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:115)
I'm not familiar with Java, but I am now trying to dig through the source code in the jars. Any help would be appreciated; thanks in advance.
Answer 0 (score: 0)
It sounds like your installation has gotten into a bit of a mess. To run Kafka Connect under Docker, you should use a dedicated image such as confluentinc/cp-kafka-connect.
For an example of Kafka Connect deployed with Docker, see http://rmoff.dev/bbuzz19_demo-code and the accompanying talk and slides.
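Roughly, running that image and installing the S3 source connector at container startup looks like the sketch below. The container name, group/topic names, and converters here are assumptions, not taken from the linked demo; check the image documentation or the demo repo for the full set of required environment variables.

$ docker run -d --name kafka-connect \
    -e CONNECT_BOOTSTRAP_SERVERS=kafka:9092 \
    -e CONNECT_REST_ADVERTISED_HOST_NAME=kafka-connect \
    -e CONNECT_GROUP_ID=connect-cluster \
    -e CONNECT_CONFIG_STORAGE_TOPIC=_connect-configs \
    -e CONNECT_OFFSET_STORAGE_TOPIC=_connect-offsets \
    -e CONNECT_STATUS_STORAGE_TOPIC=_connect-status \
    -e CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR=1 \
    -e CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR=1 \
    -e CONNECT_STATUS_STORAGE_REPLICATION_FACTOR=1 \
    -e CONNECT_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter \
    -e CONNECT_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter \
    -e CONNECT_PLUGIN_PATH=/usr/share/java,/usr/share/confluent-hub-components \
    confluentinc/cp-kafka-connect:5.3.1 \
    bash -c "confluent-hub install --no-prompt confluentinc/kafka-connect-s3-source:1.0.1 && /etc/confluent/docker/run"

Once the worker is up, you would submit the same connector.properties settings as JSON to its REST API (port 8083 by default) rather than using connect-standalone.sh.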