Spring app fails to connect to Kafka using SSL

Date: 2019-05-30 12:26:05

Tags: spring ssl apache-kafka kafka-producer-api spring-kafka

I have a Spring Boot application with a very simple Kafka producer. Everything works fine when I connect to the Kafka cluster without encryption, but when I try to connect using SSL, the producer times out. Is there additional configuration needed in the producer, or other properties I need to define, for Spring to pick up all the settings correctly?

I have set the following properties:

    spring.kafka.ssl.key-store-type=jks
    spring.kafka.ssl.trust-store-location=file:/home/ec2-user/truststore.jks
    spring.kafka.ssl.trust-store-password=test1234
    spring.kafka.ssl.key-store-location=file:/home/ec2-user/keystore.jks
    spring.kafka.ssl.key-store-password=test1234
    logging.level.org.apache.kafka=debug
    server.ssl.key-password=test1234
    spring.kafka.ssl.key-password=test1234
    spring.kafka.producer.client-id=sym
    spring.kafka.admin.ssl.protocol=ssl

When the application starts, it prints the following ProducerConfig values:

    o.a.k.clients.producer.ProducerConfig    : ProducerConfig values:
    acks = 1
    batch.size = 16384
    bootstrap.servers = [broker1.kafka.allypoc.com:9093, broker3.kafka.allypoc.com:9093, broker4.kafka.allypoc.com:9093, broker5.kafka.allypoc.com:9093]
    buffer.memory = 33554432
    client.dns.lookup = default
    client.id = sym
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 120000
    enable.idempotence = false
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    linger.ms = 0
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = [hidden]
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = /home/ec2-user/keystore.jks
    ssl.keystore.password = [hidden]
    ssl.keystore.type = jks
    ssl.protocol = ssl
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = /home/ec2-user/truststore.jks
    ssl.truststore.password = [hidden]
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = null
    value.serializer = class org.apache.kafka.common.serialization.StringSerializer

My producer is very simple:

    @Service
    public class Producer {

        private final KafkaTemplate<String, String> kafkaTemplate;

        public Producer(KafkaTemplate<String, String> kafkaTemplate) {
            this.kafkaTemplate = kafkaTemplate;
        }

        void sendMessage(String topic, String message) {
            this.kafkaTemplate.send(topic, message);
        }

        void sendMessage(String topic, String key, String message) {
            this.kafkaTemplate.send(topic, key, message);
        }
    }

Connecting to Kafka with SSL, I get a TimeoutException saying `Topic symbols not present in metadata after 60000 ms.` With debug logging turned on, I get the following messages over and over, cycling through all of the brokers:

    2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient   : [Producer clientId=sym] Completed connection to node -4. Fetching API versions.
    2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient   : [Producer clientId=sym] Initiating API versions fetch from node -4.
    2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient   : [Producer clientId=sym] Initialize connection to node 10.25.77.13:9093 (id: -3 rack: null) for sending metadata request
    2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient   : [Producer clientId=sym] Initiating connection to node 10.25.77.13:9093 (id: -3 rack: null) using address /10.25.77.13
    2019-05-29 20:10:25.994 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.common.metrics.Metrics  : Added sensor with name node--3.bytes-sent
    2019-05-29 20:10:25.996 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.common.metrics.Metrics  : Added sensor with name node--3.bytes-received
    2019-05-29 20:10:25.997 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.common.metrics.Metrics  : Added sensor with name node--3.latency
    2019-05-29 20:10:25.998 DEBUG 1381 --- [rk-thread | sym] o.apache.kafka.common.network.Selector   : [Producer clientId=sym] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -3
    2019-05-29 20:10:26.107 DEBUG 1381 --- [rk-thread | sym] o.apache.kafka.common.network.Selector   : [Producer clientId=sym] Connection with /10.25.75.151 disconnected

    java.io.EOFException: null
        at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:119) ~[kafka-clients-2.1.1.jar!/:na]
        at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:381) ~[kafka-clients-2.1.1.jar!/:na]
        at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:342) ~[kafka-clients-2.1.1.jar!/:na]
        at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:609) ~[kafka-clients-2.1.1.jar!/:na]
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:541) ~[kafka-clients-2.1.1.jar!/:na]
        at org.apache.kafka.common.network.Selector.poll(Selector.java:467) ~[kafka-clients-2.1.1.jar!/:na]
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535) ~[kafka-clients-2.1.1.jar!/:na]
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:311) ~[kafka-clients-2.1.1.jar!/:na]
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:235) ~[kafka-clients-2.1.1.jar!/:na]
        at java.base/java.lang.Thread.run(Thread.java:835) ~[na:na]

    2019-05-29 20:10:26.108 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient   : [Producer clientId=sym] Node -1 disconnected.
    2019-05-29 20:10:26.110 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient   : [Producer clientId=sym] Completed connection to node -3. Fetching API versions.

1 Answer:

Answer 0 (score: 1)

In the producer configuration, `security.protocol` should be set to `SSL`. Your ProducerConfig output confirms the problem: it shows `security.protocol = PLAINTEXT`, so the producer is speaking plaintext to the brokers' TLS port 9093, and the broker closes the connection, which is exactly what the repeated `EOFException` in the debug log indicates. You can also try setting `ssl.endpoint.identification.algorithm=""` to disable hostname verification of the broker certificate, in case that is the issue. Beyond that, it would be useful to see the Kafka broker configuration.
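A minimal sketch of that change in `application.properties`, assuming Spring Boot 2.1-era property names: at that version there is no dedicated `spring.kafka.*` key for the security protocol, so arbitrary client properties are passed through via `spring.kafka.properties.*` (the values below mirror the keystores already configured in the question):

```properties
# Passed straight through to the Kafka client; switches the producer
# (and consumer/admin) from PLAINTEXT to SSL on the brokers' 9093 port.
spring.kafka.properties.security.protocol=SSL

# Optional, for debugging only: an empty value disables hostname
# verification of the broker certificate.
spring.kafka.properties.ssl.endpoint.identification.algorithm=
```

After restarting, the `ProducerConfig values:` block printed at startup should show `security.protocol = SSL` instead of `PLAINTEXT`; if the connection still fails, the broker-side `server.properties` listener configuration is the next place to look.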