Kafka SSL handshake failure in a custom Java producer

Date: 2020-08-04 09:29:30

Tags: java ssl apache-kafka

I am trying to produce some data with my Kafka producer application, but I get the following error:

[SocketServer brokerId=0] Failed authentication with localhost/127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)

I use the SASL_SSL protocol with the PLAIN mechanism to communicate with Kafka. When I use kafka-console-producer

sh kafka-console-producer.sh --broker-list localhost:9093 --topic kafka-topic --producer.config ../config/producer.properties

and kafka-console-consumer

sh kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic kafka-topic --consumer.config ../config/consumer.properties

everything works fine. Here is the relevant part of my server.properties:

listeners=PLAINTEXT://localhost:9092,SASL_SSL://localhost:9093
advertised.listeners=PLAINTEXT://localhost:9092,SASL_SSL://localhost:9093

listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
   username="admin" \
   password="admin-secret" \
   user_admin="admin-secret";

sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
security.inter.broker.protocol=SASL_SSL
ssl.endpoint.identification.algorithm=

authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer

allow.everyone.if.no.acl.found=true

ssl.keystore.location=/mnt/data/kafka/config/keystore/kafka.keystore.jks
ssl.keystore.password=password
ssl.key.password=password
ssl.truststore.location=/mnt/data/kafka/config/truststore/kafka.truststore.jks
ssl.truststore.password=password

ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.type=JKS
ssl.truststore.type=JKS
ssl.secure.random.implementation=SHA1PRNG

producer.properties

bootstrap.servers=localhost:9093
sasl.mechanism=PLAIN
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
   username="admin" \
   password="admin-secret" \
   user_admin="admin-secret";
ssl.truststore.location=/mnt/data/kafka/config/truststore/kafka.truststore.jks
ssl.truststore.password=password

consumer.properties

bootstrap.servers=localhost:9093
group.id=test-consumer-group
sasl.mechanism=PLAIN
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
   username="admin" \
   password="admin-secret" \
   user_admin="admin-secret";

ssl.truststore.location=/mnt/data/kafka/config/truststore/kafka.truststore.jks
ssl.truststore.password=password

And here is my Java Kafka producer application:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

public class KafkaMessagesProducer {

    private KafkaProducer<String, String> producer;
    private String address;
    private final int BATCH_SIZE = 16384 * 4;

    // Producer configuration, including the SASL_SSL and truststore settings
    private Properties setProperties() {
        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, address);
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        properties.put(ProducerConfig.BATCH_SIZE_CONFIG, BATCH_SIZE);
        properties.put(ProducerConfig.LINGER_MS_CONFIG, 200);
        properties.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");
        properties.put("acks", "all");
        properties.put("sasl.mechanism", "PLAIN");
        properties.put("security.protocol", "SASL_SSL");
        properties.put("sasl.jaas.config", "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"admin-secret\" user_admin=\"admin-secret\";");
        properties.put("ssl.truststore.location", "/mnt/data/kafka/config/truststore/kafka.truststore.jks");
        properties.put("ssl.truststore.password", "password");
        return properties;
    }

    public void createTopicWithPartitions(String topicName, int partitionsCount) throws ExecutionException, InterruptedException {
        Properties properties = new Properties();
        properties.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, address);
        AdminClient adminClient = AdminClient.create(properties);

        boolean isTopicExists = adminClient.listTopics().names().get().stream()
                .anyMatch(name -> name.equals(topicName));

        if (isTopicExists) {
            System.out.println("Topic already exists");
        } else {
            NewTopic newTopic = new NewTopic(topicName, partitionsCount, (short) 1);
            adminClient.createTopics(Collections.singleton(newTopic)).all().get();
        }

        adminClient.close();
    }

    public void sendMessages(String topicName, String payload, int messagesCount) {
        for (int i = 0; i < messagesCount; i++) {
            // DataUtils is a project-specific helper (not shown) that generates the partition key
            String partitionKey = DataUtils.generateSourceDeviceId(15).toUpperCase();
            producer.send(new ProducerRecord<>(topicName, partitionKey, payload));
        }
    }

    public KafkaMessagesProducer(String address) {
        this.address = address;
        this.producer = new KafkaProducer<>(setProperties());
    }

    public int getBATCH_SIZE() {
        return BATCH_SIZE;
    }
}
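
Roughly how the class is driven (topic name, payload and message count below are placeholders, not the real values):

KafkaMessagesProducer messagesProducer = new KafkaMessagesProducer("localhost:9093");
messagesProducer.createTopicWithPartitions("kafka-topic", 3);
messagesProducer.sendMessages("kafka-topic", "{\"value\":\"test\"}", 1000);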

As described above, the console producer and consumer work fine, yet my Java application fails with the SSL handshake error. After turning off SASL_SSL, the same Java application also works without problems.
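
"Turning off SASL_SSL" here means pointing the producer at the PLAINTEXT listener on 9092 and removing the security-related keys entirely; roughly like this, with nothing else changed:

// Non-SSL configuration for comparison: PLAINTEXT listener, no security.protocol / sasl.* / ssl.* keys
Properties plainProps = new Properties();
plainProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
plainProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
plainProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
KafkaProducer<String, String> plainProducer = new KafkaProducer<>(plainProps);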

UPD: the certificates were generated with this tool: https://github.com/confluentinc/confluent-platform-security-tools/blob/master/kafka-generate-ssl.sh
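
A quick sanity check on the truststore that script produces is to load it and list its entries. A minimal sketch, using the path and password from the configs above; the CA certificate that signed the broker certificate has to appear here, otherwise the client rejects the broker during the handshake:

import java.io.FileInputStream;
import java.security.KeyStore;
import java.util.Collections;

public class TruststoreCheck {
    public static void main(String[] args) throws Exception {
        // Load the JKS truststore and print every entry it contains
        KeyStore trustStore = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("/mnt/data/kafka/config/truststore/kafka.truststore.jks")) {
            trustStore.load(in, "password".toCharArray());
        }
        for (String alias : Collections.list(trustStore.aliases())) {
            System.out.println(alias + " -> " + trustStore.getCertificate(alias).getType());
        }
    }
}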

1 answer:

Answer 0 (score: 0)

The problem was in the createTopicWithPartitions method. There I was overriding the properties built in setProperties() with a fresh Properties object containing only (AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, address), so the AdminClient connected to the SASL_SSL listener without any of the SASL_SSL settings.
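
A minimal sketch of the corrected method, assuming the AdminClient simply reuses the same properties the producer is built from:

    public void createTopicWithPartitions(String topicName, int partitionsCount)
            throws ExecutionException, InterruptedException {
        // Reuse the full producer configuration so the AdminClient also gets
        // security.protocol=SASL_SSL, the JAAS config and the truststore,
        // instead of talking plain TCP to the SASL_SSL port 9093
        try (AdminClient adminClient = AdminClient.create(setProperties())) {
            boolean topicExists = adminClient.listTopics().names().get().contains(topicName);
            if (topicExists) {
                System.out.println("Topic already exists");
            } else {
                NewTopic newTopic = new NewTopic(topicName, partitionsCount, (short) 1);
                adminClient.createTopics(Collections.singleton(newTopic)).all().get();
            }
        }
    }

The AdminClient will likely warn about producer-only settings it does not recognize (batch.size, linger.ms, the serializers), but it now performs a TLS handshake on port 9093 instead of sending plaintext to an SSL listener, which is what made the broker log "SSL handshake failed".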