I have been trying to deploy Kafka with Helm charts, so I defined a NodePort service for the Kafka pods. A console Kafka producer and consumer using the same host and port work correctly. But when I run a Spark application as the data consumer, with Kafka as the producer, it cannot connect to the Kafka service. I use the minikube IP (rather than a node IP) as the host, together with the service's NodePort. How can I change this behavior?

In the Spark logs I can see that the NodePort service resolves the endpoints, but the brokers are then discovered with pod addresses and ports:
INFO AbstractCoordinator: [Consumer clientId=consumer-1, groupId=avro_data] Discovered group coordinator 172.17.0.20:9092 (id: 2147483645 rack: null)
INFO ConsumerCoordinator: [Consumer clientId=consumer-1, groupId=avro_data] Revoking previously assigned partitions []
INFO AbstractCoordinator: [Consumer clientId=consumer-1, groupId=avro_data] (Re-)joining group
WARN NetworkClient: [Consumer clientId=consumer-1, groupId=avro_data] Connection to node 2147483645 (/172.17.0.20:9092) could not be established. Broker may not be available.
INFO AbstractCoordinator: [Consumer clientId=consumer-1, groupId=avro_data] Group coordinator 172.17.0.20:9092 (id: 2147483645 rack: null) is unavailable or invalid, will attempt rediscovery
WARN NetworkClient: [Consumer clientId=consumer-1, groupId=avro_data] Connection to node 2 (/172.17.0.20:9092) could not be established. Broker may not be available.
WARN NetworkClient: [Consumer clientId=consumer-1, groupId=avro_data] Connection to node 0 (/172.17.0.12:9092) could not be established. Broker may not be available.
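These warnings are the key symptom: the consumer reaches the cluster through the NodePort for the initial bootstrap, but the metadata response it receives advertises each broker's internal pod address (172.17.0.x:9092), which is not routable from outside the cluster. Kafka clients always reconnect to the addresses listed in `advertised.listeners`, so the broker must advertise an address the external client can actually reach. A minimal broker-side sketch (the listener names and hostnames here are illustrative, not taken from the chart):

```properties
# server.properties (sketch): separate internal and external listeners.
# A client that bootstraps via EXTERNAL gets the minikube address back in
# the metadata response instead of the pod IP.
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:31090
advertised.listeners=INTERNAL://kafka-0.kafka-headless.default:9092,EXTERNAL://192.168.99.100:31090
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
```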
The NodePort service is defined as follows:
kind: Service
apiVersion: v1
metadata:
  name: kafka-service
spec:
  selector:
    app: cp-kafka
    release: my-confluent-oss
  ports:
    - protocol: TCP
      targetPort: 9092
      port: 32400
      nodePort: 32400
  type: NodePort
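A single NodePort Service in front of all brokers is also problematic in itself: the Kafka protocol requires clients to connect to specific brokers for specific partitions, while this Service load-balances each TCP connection to an arbitrary pod. The usual pattern is one external Service per broker. A sketch with assumed names (the `statefulset.kubernetes.io/pod-name` label is set automatically on StatefulSet pods; the external listener port is an assumption):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: kafka-0-external    # assumed name; one such Service per broker
spec:
  type: NodePort
  selector:
    app: cp-kafka
    release: my-confluent-oss
    statefulset.kubernetes.io/pod-name: my-confluent-oss-cp-kafka-0  # pins to broker 0
  ports:
    - protocol: TCP
      port: 31090
      targetPort: 31090    # broker 0's external listener port (assumed)
      nodePort: 31090
```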
Spark consumer configuration:
def kafkaParams() = Map[String, Object](
  "bootstrap.servers" -> "192.168.99.100:32400",
  "schema.registry.url" -> "http://192.168.99.100:8081",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[KafkaAvroDeserializer],
  "group.id" -> "avro_data",
  "auto.offset.reset" -> "earliest",
  "enable.auto.commit" -> (false: java.lang.Boolean)
)
Kafka producer configuration:
props.put("bootstrap.servers", "192.168.99.100:32400")
props.put("client.id", "avro_data")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer")
props.put("schema.registry.url", "http://192.168.99.100:32500")
Answer 0 (score: 2)
I faced a similar problem when I tried to access a Kafka broker (cp-helm-chart) running on minikube from outside the cluster.
Here is how I resolved it: instead of installing the chart straight from the remote repository, fetch it to a local directory first so that its values can be edited, then install with Helm.
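Concretely, the change is enabling the external NodePort listener in the cp-kafka sub-chart's values.yaml before running helm install (key names assumed from the cp-helm-charts defaults; verify them against your chart version):

```yaml
# charts/cp-kafka/values.yaml (excerpt; keys assumed from chart defaults)
nodeport:
  enabled: true              # default is false
  servicePort: 19092
  firstListenerPort: 31090   # broker i is exposed on NodePort 31090 + i
```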
Now you can access the Kafka brokers running inside the k8s cluster from outside by pointing bootstrap.servers to 196.169.99.100:31090.
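With that convention, broker i is reachable on firstListenerPort + i, so a bootstrap string for a multi-broker cluster can be assembled as below. This is a hypothetical helper, not part of any Kafka API; the port arithmetic assumes the chart's firstListenerPort convention:

```scala
// Hypothetical helper: builds a bootstrap.servers string for brokers
// exposed on consecutive NodePorts (firstListenerPort + brokerId).
def externalBootstrap(minikubeIp: String, firstListenerPort: Int, brokerCount: Int): String =
  (0 until brokerCount)
    .map(i => s"$minikubeIp:${firstListenerPort + i}")
    .mkString(",")
```

For a three-broker cluster on minikube this yields `192.168.99.100:31090,192.168.99.100:31091,192.168.99.100:31092`.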