I set up an MSK cluster in AWS and created an EC2 instance in the same VPC.
I tried kafka-console-consumer.sh and kafka-console-producer.sh, and they worked fine: I could see the messages sent by the producer in the consumer.
1) I downloaded the S3 connector (https://docs.confluent.io/current/connect/kafka-connect-s3/index.html).
2) Extracted the archive to /home/ec2-user/plugins/
3) Created connect-standalone.properties with the following content:
bootstrap.servers=<my brokers>
plugin.path=/home/ec2-user/kafka-plugins
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets
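As an aside, the `<my brokers>` placeholder above is the MSK bootstrap broker string, which can be looked up with the AWS CLI; a sketch, with the cluster ARN left as a placeholder:

```shell
# Look up the bootstrap broker string for the MSK cluster
# (replace <cluster-arn> with your cluster's ARN)
aws kafka get-bootstrap-brokers --cluster-arn <cluster-arn>
# The TLS listener (port 9094) is returned as BootstrapBrokerStringTls
```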
4) Created s3-sink.properties with the following content:
name=s3-sink
connector.class=io.confluent.connect.s3.S3SinkConnector
tasks.max=1
topics=<My Topic>
s3.region=us-east-1
s3.bucket.name=vk-ingestion-dev
s3.part.size=5242880
flush.size=1
storage.class=io.confluent.connect.s3.storage.S3Storage
format.class=io.confluent.connect.s3.format.json.JsonFormat
schema.generator.class=io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator
partitioner.class=io.confluent.connect.storage.partitioner.DefaultPartitioner
schema.compatibility=NONE
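For reference, the standalone worker is launched by passing both files on the command line; a sketch, assuming Kafka is unpacked under /home/ec2-user/kafka:

```shell
# Start a standalone Connect worker: first argument is the worker config,
# the rest are connector configs
cd /home/ec2-user/kafka
bin/connect-standalone.sh connect-standalone.properties s3-sink.properties
```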
When I run connect-standalone.sh with the two properties files above, it waits for a while and then throws the following error:
[AdminClient clientId=adminclient-1] Metadata update failed (org.apache.kafka.clients.admin.internals.AdminMetadataManager:237)
org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
[2019-10-22 19:28:36,789] INFO [AdminClient clientId=adminclient-1] Metadata update failed (org.apache.kafka.clients.admin.internals.AdminMetadataManager:237)
org.apache.kafka.common.errors.TimeoutException: The AdminClient thread has exited.
[2019-10-22 19:28:36,796] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectStandalone:124)
org.apache.kafka.connect.errors.ConnectException: Failed to connect to and describe Kafka cluster. Check worker's broker connection and security properties.
at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:64)
at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:45)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:81)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:58)
... 2 more
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
Do I need to look into any security settings?
Answer 0 (score: 1)
After adding the following SSL configuration, it worked:
security.protocol=SSL
ssl.truststore.location=/tmp/kafka.client.truststore.jks
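MSK brokers present certificates signed by a public CA, so the JVM's default CA truststore can serve as the client truststore; a common way to produce the file referenced above is to copy it out of the JDK (the JDK path varies by distribution and version, so it is left as a placeholder):

```shell
# Copy the JVM's default CA truststore to the path referenced in the
# worker config (replace <your-jdk> with the installed JDK directory)
cp /usr/lib/jvm/<your-jdk>/jre/lib/security/cacerts /tmp/kafka.client.truststore.jks
```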
After adding the parameters above, the connector started without errors, but no data was uploaded to S3.
Adding the same parameters again with producer and consumer prefixes made it work.
Example:
producer.security.protocol=SSL
producer.ssl.truststore.location=/tmp/kafka.client.truststore.jks
consumer.security.protocol=SSL
consumer.ssl.truststore.location=/tmp/kafka.client.truststore.jks
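A quick way to confirm TLS connectivity from the EC2 instance, independent of Connect, is the console consumer with a matching client config; a sketch, with the broker host and topic name left as placeholders:

```shell
# Minimal TLS client config for the console tools
cat > /tmp/client.properties <<'EOF'
security.protocol=SSL
ssl.truststore.location=/tmp/kafka.client.truststore.jks
EOF

# Consume over the TLS listener (port 9094 on MSK)
bin/kafka-console-consumer.sh \
    --bootstrap-server <broker-host>:9094 \
    --consumer.config /tmp/client.properties \
    --topic <My Topic> --from-beginning
```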