Problem starting a new Kafka Connect connector over SSL

Date: 2020-11-11 10:46:05

Tags: apache-kafka apache-kafka-connect

I am trying to set up a new Elasticsearch sink job on our Kafka Connect cluster. The cluster has been running smoothly for a couple of months, connecting to Kafka over SASL_SSL and to an Elastic instance on host A over HTTPS. I also run it locally using Docker (an image based on Confluent's Kafka Connect image v6.0.0), with Kafka residing in a test environment, and I start jobs with REST calls.

The docker-compose file used to run it locally looks like this:

version: '3.7'
services:
  connect:
    build:
      dockerfile: Dockerfile.local
      context: ./
    container_name: kafka-connect
    ports:
      - "8083:8083"
    environment:
      KAFKA_OPTS: -Djava.security.krb5.conf=/<path-to>/secrets/krb5.conf 
                  -Djava.security.auth.login.config=/<path-to>/rest-basicauth-jaas.conf
      CONNECT_BOOTSTRAP_SERVERS: <KAFKA-INSTANCE-1>:2181,<KAFKA-INSTANCE-2>:2181,<KAFKA-INSTANCE-3>:2181
      CONNECT_REST_ADVERTISED_HOST_NAME: kafka-connect
      CONNECT_REST_PORT: 8083
      CONNECT_REST_EXTENSION_CLASSES: org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension
      CONNECT_GROUP_ID: <kc-group>
      CONNECT_CONFIG_STORAGE_TOPIC: service-assurance.test.internal.connect.configs
      CONNECT_OFFSET_STORAGE_TOPIC: service-assurance.test.internal.connect.offsets
      CONNECT_STATUS_STORAGE_TOPIC: service-assurance.test.internal.connect.status
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.converters.IntegerConverter
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_ZOOKEEPER_CONNECT: <KAFKA-INSTANCE-1>:2181,<KAFKA-INSTANCE-2>:2181,<KAFKA-INSTANCE-3>:2181
      CONNECT_SECURITY_PROTOCOL: SASL_SSL
      CONNECT_SASL_KERBEROS_SERVICE_NAME: "kafka"
      CONNECT_SASL_JAAS_CONFIG: com.sun.security.auth.module.Krb5LoginModule required \
                                useKeyTab=true \
                                storeKey=true \
                                keyTab="/<path-to>/kafka-connect.keytab" \
                                principal="<AD-USER>";
      CONNECT_SASL_MECHANISM: GSSAPI
      CONNECT_SSL_TRUSTSTORE_LOCATION: "/<path-to>/truststore.jks"
      CONNECT_SSL_TRUSTSTORE_PASSWORD: <pwd>
      CONNECT_CONSUMER_SECURITY_PROTOCOL: SASL_SSL
      CONNECT_CONSUMER_SASL_KERBEROS_SERVICE_NAME: "kafka"
      CONNECT_CONSUMER_SASL_JAAS_CONFIG: com.sun.security.auth.module.Krb5LoginModule required \
                                useKeyTab=true \
                                storeKey=true \
                                keyTab="/<path-to>/kafka-connect.keytab" \
                                principal="<AD-USER>";
      CONNECT_CONSUMER_SASL_MECHANISM: GSSAPI
      CONNECT_CONSUMER_SSL_TRUSTSTORE_LOCATION: "/<path-to>/truststore.jks"
      CONNECT_CONSUMER_SSL_TRUSTSTORE_PASSWORD: <pwd>
      CONNECT_PLUGIN_PATH: "/usr/share/java,/etc/kafka-connect/jars"

The Kubernetes configuration is similar.

The connector is started with something like:

curl  -X POST -H "Content-Type: application/json" --data '{
  "name": "connector-name",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "tasks.max": 2,
    "batch.size": 200,
    "max.buffered.records": 1500,
    "flush.timeout.ms": 120000,
    "topics": "topic.connector",
    "auto.create.indices.at.start": false,
    "key.ignore": true,
    "value.converter.schemas.enable": false,
    "key.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
    "schema.ignore": true,
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "behavior.on.malformed.documents" : "ignore",
    "behavior.on.null.values": "ignore",
    "connection.url": "https://<elastic-host>",
    "connection.username": "<user>",
    "connection.password": "<pwd>",
    "type.name": "_doc"
  }
}' <host>/connectors/
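Once the connector is posted, the Connect REST API can report whether its tasks actually came up; a FAILED task's `trace` field carries the full stack trace, including PKIX errors like the one below. A minimal sketch, assuming the connector name and REST endpoint placeholders used above:

```shell
# Hypothetical values; substitute your real Connect REST endpoint and connector name.
HOST="<host>"
CONNECTOR="connector-name"

# Overall connector state plus per-task state; a FAILED task's "trace"
# field contains the full exception stack trace.
curl -s "$HOST/connectors/$CONNECTOR/status" || true

# After fixing the configuration, a failed task (task 0 here) can be restarted:
# curl -s -X POST "$HOST/connectors/$CONNECTOR/tasks/0/restart"
```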

Now I have been tasked with setting up another connector, this time against an Elastic instance hosted on host B. The problem I am running into is the infamous:

    sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
    sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

I modified the working truststore to also include the CA root certificate for host B. I believe the truststore is fine, because I can use it to successfully connect to both A and B from a small Java snippet (the SSLPoke.class actually found on an Atlassian page).
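The contents of the merged truststore can also be checked from the shell. This is a sketch assuming `keytool` is on the PATH and reusing the placeholder path and password from the compose file above:

```shell
# Hypothetical placeholders, as in the compose file above.
TRUSTSTORE="/<path-to>/truststore.jks"
STOREPASS="<pwd>"

# List the trusted entries; both host A's and host B's CA roots
# should each appear as a trustedCertEntry row.
keytool -list -keystore "$TRUSTSTORE" -storepass "$STOREPASS" 2>/dev/null \
  | grep -i trustedCertEntry || true
```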

The connector talking to host A still works with the newly updated truststore, but the connector for host B does not.

I have scoured the internet for clues on how to resolve this and came across suggestions to explicitly add:

"elastic.https.ssl.truststore.location": "/<pathto>/truststore.jks",
"elastic.https.ssl.truststore.password": "<pwd>",

to the connector configuration. Some other pages suggested adding the truststore to the Kafka Connect configuration via KAFKA_OPTS, like this:

  KAFKA_OPTS: -Djava.security.krb5.conf=/<path-to>/secrets/krb5.conf 
              -Djava.security.auth.login.config=/<path-to>/rest-basicauth-jaas.conf
              -Djavax.net.ssl.trustStore=/<path-to>/truststore.jks

Following these suggestions, I can in fact get the connector for host B to start successfully. But now comes the annoying part: after adding the extra parameter to KAFKA_OPTS, my old connectors talking to A stop working, with exactly the same error! So now I have a situation where either the connectors for A work, or the connector for B works, but never both at the same time.
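One way to narrow this down is to compare which issuer each host actually presents: if host A's chain is signed by a CA that is present in the JVM's default cacerts but missing from the custom truststore, then overriding `javax.net.ssl.trustStore` would produce exactly this either/or behavior. A diagnostic sketch with hypothetical hostnames:

```shell
# Hypothetical hostnames; replace with the real Elastic hosts A and B.
for h in elastic-a.example.com elastic-b.example.com; do
  echo "== $h =="
  # Print the issuer of the leaf certificate each host serves;
  # that issuing CA must be present in whichever truststore the JVM ends up using.
  openssl s_client -connect "$h:443" -servername "$h" </dev/null 2>/dev/null \
    | openssl x509 -noout -issuer 2>/dev/null || true
done
```

If the two issuers differ, importing both CA roots into a single truststore (and referencing only that one) is the direction to investigate.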

Please, if anyone could give me any suggestions or ideas on how to resolve this, it would be greatly appreciated, because this is driving me crazy.

0 Answers

There are no answers yet.