I am using the following docker compose snippet:
connect:
  image: confluentinc/cp-kafka-connect:latest
  hostname: connect
  container_name: connect
  depends_on:
    - zookeeper
    - kafka
  ports:
    - "8083:8083"
  environment:
    CONNECT_BOOTSTRAP_SERVERS: 'kafka:9092'
    CONNECT_REST_ADVERTISED_HOST_NAME: connect
    CONNECT_GROUP_ID: compose-connect-group
    CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
    CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
    CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
    CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
    CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_PLUGIN_PATH: /usr/share/java
    CONNECT_ZOOKEEPER_CONNECT: 'zookeeper:2181'
The container appears to start up fine, but when I try to add an HDFS sink connector through the Connect container's REST API:
curl -s -X POST -H 'Content-Type: application/json' --data \
@confluent_hdfs.json http://localhost:8083/connectors
The confluent_hdfs.json file contains the following:
{
  "name": "hdfs-sink",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "1",
    "topics": "test",
    "hdfs.url": "hdfs://localhost:9000",
    "flush.size": "1000",
    "name": "hdfs-sink"
  }
}
I get an HTTP 500 response. Checking the Connect container's logs shows:
WARN /connectors (org.eclipse.jetty.server.HttpChannel)
javax.servlet.ServletException: javax.servlet.ServletException:
org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError:
io/confluent/connect/hdfs/HdfsSinkConnectorConfig
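To dig into this, one thing worth checking is whether the HDFS connector jars are actually present inside the container; a minimal sketch (the directory name kafka-connect-hdfs is an assumption based on the default Confluent image layout):

# List anything hdfs-related on the plugin path.
docker exec connect find /usr/share/java -maxdepth 1 -iname '*hdfs*'

# List the connector's jars; /usr/share/java/kafka-connect-hdfs is the
# assumed default location in the confluentinc/cp-kafka-connect image.
docker exec connect ls /usr/share/java/kafka-connect-hdfs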
While looking into this, I came across the following post:
https://github.com/confluentinc/kafka-connect-hdfs/issues/273
which suggests the plugin path is wrong. As far as I can tell, though, I have set it correctly to /usr/share/java, and I can also see the correctly configured symlinks that the post refers to.
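The plugin path the worker actually runs with can also be confirmed from inside the container; a minimal sketch, assuming the Confluent image renders its generated worker config to /etc/kafka-connect/kafka-connect.properties (the exact path may differ by image version):

# Show the effective plugin.path in the generated worker config.
docker exec connect grep plugin.path /etc/kafka-connect/kafka-connect.properties

# Inspect the symlinks under the plugin path that the GitHub issue refers to.
docker exec connect ls -la /usr/share/java | grep -i hdfs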
Furthermore, when executing the request:
curl http://localhost:8083/connector-plugins
I see the following response:
[
{"class":"io.confluent.connect.hdfs.HdfsSinkConnector","type":"sink","version":"4.1.1"},
{"class":"io.confluent.connect.hdfs.tools.SchemaSourceConnector","type":"source","version":"1.1.1-cp1"},
{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":"sink","version":"1.1.1-cp1"},
{"class":"org.apache.kafka.connect.file.FileStreamSourceConnector","type":"source","version":"1.1.1-cp1"}
]
So I'm not sure whether I'm missing something in the compose file, or whether there's something else I'm missing.
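For reference, the same config can also be run through Connect's config-validation endpoint, which can surface class-loading errors without creating the connector; a minimal sketch using the same settings:

# Validate the connector config; the class name in the URL must match
# the connector.class in the body.
curl -s -X PUT -H 'Content-Type: application/json' --data \
  '{"connector.class":"io.confluent.connect.hdfs.HdfsSinkConnector","tasks.max":"1","topics":"test","hdfs.url":"hdfs://localhost:9000","flush.size":"1000","name":"hdfs-sink"}' \
  http://localhost:8083/connector-plugins/io.confluent.connect.hdfs.HdfsSinkConnector/config/validate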
Answer (score: 1)
Thanks to dawsaw, I went through the example you suggested and realized the problem was with a connector plugin I had installed by mounting the connector folder as a volume. Unfortunately, I had mounted the connector into the wrong part of the connect container, which seems to have broken the container's ability to run correctly.
In the end, what I did was:
connect:
  image: confluentinc/cp-kafka-connect:4.1.1
  container_name: connect
  restart: always
  ports:
    - "8083:8083"
  depends_on:
    - zookeeper
    - kafka
  volumes:
    - $PWD/confluentinc-kafka-connect-rabbitmq-1.0.0-preview:/usr/share/java/confluentinc-kafka-connect-rabbitmq-1.0.0-preview
  environment:
    CONNECT_BOOTSTRAP_SERVERS: "kafka:9092"
    CONNECT_REST_ADVERTISED_HOST_NAME: "connect"
    CONNECT_REST_PORT: 8083
    CONNECT_GROUP_ID: "connect"
    CONNECT_CONFIG_STORAGE_TOPIC: connect-config
    CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
    CONNECT_STATUS_STORAGE_TOPIC: connect-status
    CONNECT_REPLICATION_FACTOR: 1
    CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.storage.StringConverter"
    CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
    CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
    CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
    CONNECT_PLUGIN_PATH: "/usr/share/java"
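After bringing the service back up, a quick sanity check is to confirm the mounted plugin now shows up alongside the bundled ones; a minimal sketch, assuming the stack is started with docker-compose and the REST port is still mapped to localhost:8083:

# Recreate the connect service with the volume mount in place.
docker-compose up -d connect

# The mounted RabbitMQ connector should now appear in the plugin listing.
curl -s http://localhost:8083/connector-plugins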
Thanks again for the help, and apologies for the poorly constructed example code originally posted.