I'm trying to start a new project with Spring Boot version 2.1.5.RELEASE and Kafka version 2.0.1, but I've hit a problem: I cannot connect to my remote Kafka broker over SSL. Meanwhile, my old project with Spring Boot version 2.0.1.RELEASE and Kafka version 1.0.1 works fine.
application.yml
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: my-kafka:9093
          autoCreateTopics: false
      bindings:
        customers-in:
          destination: customers
          contentType: application/json
        customers-out:
          destination: customers
          contentType: application/json
  kafka:
    ssl:
      protocol: SSL
      trust-store-location: guest.truststore
      trust-store-password: 123456
      key-password: 123456
      key-store-location: guest.keystore
      key-store-password: 123456
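One thing I suspect (but have not verified) is that the spring.kafka.ssl.* properties never reach the binder's topic provisioner. A sketch of what I am considering instead: passing the same SSL settings through the binder's configuration map, which spring-cloud-stream documents as being applied to all clients created by the binder:

spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: my-kafka:9093
          autoCreateTopics: false
          configuration:
            # assumption: these reach producers, consumers, and the topic provisioner
            security.protocol: SSL
            ssl.truststore.location: guest.truststore
            ssl.truststore.password: 123456
            ssl.keystore.location: guest.keystore
            ssl.keystore.password: 123456
            ssl.key.password: 123456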
I get an error message like this:
2019-06-10 17:59:03.636 ERROR 30220 --- [ main] o.s.c.s.b.k.p.KafkaTopicProvisioner : Failed to obtain partition information
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
Old project properties (everything works fine):
spring.kafka.bootstrap-servers=my-kafka:9093
spring.kafka.consumer.properties.[group.id]=group_23_spring-kafka
# SSL
spring.kafka.properties.[security.protocol]=SSL
spring.kafka.ssl.trust-store-location=guest.truststore
spring.kafka.ssl.trust-store-password=123456
spring.kafka.ssl.key-store-password=123456
spring.kafka.ssl.key-store-location=guest.keystore
spring.kafka.ssl.key-password=123456
If I update my working project with

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.0.1</version>
</dependency>
it produces the same error.
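As an aside, when overriding the Kafka client jars on Spring Boot, I believe the intended route is Boot's kafka.version dependency-management property rather than pinning a single artifact, so that all Kafka dependencies stay aligned. A sketch of the pom change:

<properties>
    <!-- assumption: Spring Boot's dependency management then resolves all Kafka artifacts at this version -->
    <kafka.version>2.0.1</kafka.version>
</properties>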
The producer configurations in the INFO logs of my new and old Kafka versions look almost identical.
New configuration: Kafka version 2.0.1, Spring Boot version 2.1.5.RELEASE
acks = 1
batch.size = 16384
bootstrap.servers = [kafka-sbox.epm-eco.projects.epam.com:9093]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = SSL
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm =
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = guest.keystore
ssl.keystore.password = [hidden]
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = guest.truststore
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
Old configuration: Kafka version 1.0.1, Spring Boot version 2.0.1.RELEASE
acks = 1
batch.size = 16384
bootstrap.servers = [kafka-sbox.epm-eco.projects.epam.com:9093]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = SSL
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = guest.keystore
ssl.keystore.password = [hidden]
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = guest.truststore
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
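To rule the binder in or out, a minimal reproducer against plain kafka-clients 2.0.1 could exercise the same metadata fetch outside Spring (a sketch; the store files, passwords, and topic name are the ones from my configs above):

import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.config.SslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;

public class SslSmokeTest {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-kafka:9093");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // same SSL settings as in the application configs above
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "guest.truststore");
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "123456");
        props.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "guest.keystore");
        props.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, "123456");
        props.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, "123456");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // partitionsFor() forces a metadata fetch, the same request the
            // KafkaTopicProvisioner times out on
            System.out.println(producer.partitionsFor("customers"));
        }
    }
}

If partitionsFor() also hangs for 60 seconds here, the problem is in the SSL setup itself rather than in the binder.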
I also asked the same question on the spring-cloud-stream-binder-kafka GitHub, but we have not reached a conclusion there.
Has anyone run into the same problem? Or does anyone know how I need to configure application.yml to connect to my Kafka broker over SSL?
Thanks