Problem connecting Kafka Connect to MSK over SSL

Date: 2020-03-12 21:25:50

Tags: apache-kafka apache-kafka-connect aws-msk

I can't use the AWS MSK TLS endpoint from the Confluent Kafka Connect image because it times out creating/reading topics. Going through the PlainText endpoint works fine.

I tried referencing the JKS store path available in the Docker image, but it still doesn't work, and I'm not sure whether I'm missing some other configuration. From what I've read in the AWS documentation, Amazon MSK brokers use public AWS Certificate Manager certificates, so any truststore that trusts Amazon Trust Services also trusts the certificates of the Amazon MSK brokers.
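As a quick sanity check of that assumption, you can inspect the certificate chain the broker actually presents on the TLS port. This is a sketch; the broker hostname below is a placeholder for one of your MSK TLS endpoints:

```shell
# Hypothetical broker hostname; replace with one of your MSK TLS endpoints.
BROKER=b-1.mycluster.abc123.us-east-1.amazonaws.com

# Print the issuer and subject of the certificate the broker presents on 9094.
# If the broker uses ACM certificates, the issuer should be an Amazon CA.
openssl s_client -connect "$BROKER:9094" -servername "$BROKER" < /dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject
```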

**Error:**
org.apache.kafka.connect.errors.ConnectException: Timed out while checking for or creating topic(s) '_confluent-command'. This could indicate a connectivity issue, unavailable topic partitions, or if this is your first use of the topic it may have taken too long to create.

Attaching the kafka-connect configuration I'm using; any help would be great :)

INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values:

bootstrap.servers = [**.us-east-1.amazonaws.com:9094,*.us-east-1.amazonaws.com:9094]
client.dns.lookup = default
client.id =
connections.max.idle.ms = 300000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 120000
retries = 5
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = SSL
security.providers = null
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = JKSStorePath
ssl.truststore.password = ***
ssl.truststore.type = JKS
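To isolate whether the problem is in Kafka Connect or in the TLS path itself, the same SSL settings can be exercised with the standard Kafka CLI tools. This is a sketch; the truststore path, password, and broker hostname are placeholders:

```shell
# client-ssl.properties: minimal SSL client config mirroring the
# AdminClientConfig values above (path and password are placeholders).
cat > client-ssl.properties <<'EOF'
security.protocol=SSL
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=changeit
EOF

# List topics over the TLS endpoint. If this also times out, the TLS/network
# path is broken independently of Kafka Connect.
kafka-topics --bootstrap-server b-1.mycluster.abc123.us-east-1.amazonaws.com:9094 \
  --command-config client-ssl.properties --list
```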

1 answer:

Answer 0 (score: 1)

I used the Java cacerts in the Docker image at /usr/lib/jvm/zulu-8-amd64/jre/lib/security/cacerts as the truststore. Using keytool, if you inspect the certificates:

keytool --list -v -keystore /usr/lib/jvm/zulu-8-amd64/jre/lib/security/cacerts|grep Amazon

it will list the Amazon CAs.
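If you prefer a standalone truststore file rather than pointing directly at the JVM's cacerts, you can copy it with keytool. This is a sketch; the destination path is an assumption, and `changeit` is the JVM cacerts default password:

```shell
# Copy the JVM's default cacerts into a dedicated JKS truststore that can be
# mounted into the container or referenced by ssl.truststore.location.
keytool -importkeystore \
  -srckeystore /usr/lib/jvm/zulu-8-amd64/jre/lib/security/cacerts \
  -srcstorepass changeit \
  -destkeystore /tmp/msk-truststore.jks \
  -deststorepass changeit \
  -deststoretype JKS
```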

I then started the container with the following command:

docker run -d \
  --name=kafka-connect-avro-ssl \
  --net=host \
  -e CONNECT_BOOTSTRAP_SERVERS=<msk_broker1>:9094,<msk_broker2>:9094,<msk_broker3>:9094 \
  -e CONNECT_REST_PORT=28083 \
  -e CONNECT_GROUP_ID="quickstart-avro" \
  -e CONNECT_CONFIG_STORAGE_TOPIC="avro-config" \
  -e CONNECT_OFFSET_STORAGE_TOPIC="avro-offsets" \
  -e CONNECT_STATUS_STORAGE_TOPIC="avro-status" \
  -e CONNECT_KEY_CONVERTER="io.confluent.connect.avro.AvroConverter" \
  -e CONNECT_VALUE_CONVERTER="io.confluent.connect.avro.AvroConverter" \
  -e CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL="http://<hostname of EC2 instance>:8081" \
  -e CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL="http://<hostname of EC2 instance>:8081" \
  -e CONNECT_INTERNAL_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
  -e CONNECT_INTERNAL_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
  -e CONNECT_REST_ADVERTISED_HOST_NAME="<hostname of EC2 instance>" \
  -e CONNECT_LOG4J_ROOT_LOGLEVEL=DEBUG \
  -e CONNECT_SECURITY_PROTOCOL=SSL \
  -e CONNECT_SSL_TRUSTSTORE_LOCATION=/usr/lib/jvm/zulu-8-amd64/jre/lib/security/cacerts \
  -e CONNECT_SSL_TRUSTSTORE_PASSWORD=changeit \
  confluentinc/cp-kafka-connect:latest

With that, it started up successfully. I was also able to connect into the container, create topics, and produce and consume from inside it. If you cannot create topics, it is likely a network connectivity issue, possibly a problem with the security group attached to the MSK cluster blocking port 2181 and the TLS port 9094.
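Those ports can be checked for plain TCP reachability before debugging TLS at all. A sketch, with a placeholder broker hostname:

```shell
# Hypothetical broker hostname; substitute one of your MSK brokers.
BROKER=b-1.mycluster.abc123.us-east-1.amazonaws.com

# Test raw TCP reachability of the TLS port (9094) and ZooKeeper port (2181).
# A timeout here usually points at security groups or routing, not TLS config.
nc -zv -w 5 "$BROKER" 9094
nc -zv -w 5 "$BROKER" 2181
```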