I'm running the confluent_kafka client in Python. Currently I don't get any errors when trying to produce and consume messages, but the problem is that the producer says the message was delivered successfully, yet the consumer can't find any messages.
I've created a topic, and this is the class I'm using:
from confluent_kafka import Producer, Consumer
from config import config
import json


class Kafka:
    """
    Kafka Handler.
    """

    def __init__(self, kafka_brokers_sasl, api_key):
        """
        Arguments:
            kafka_brokers_sasl {str} -- String containing kafka brokers separated by comma (no spaces)
            api_key {str} -- Kafka Api Key
        """
        self.driver_options = {
            'bootstrap.servers': kafka_brokers_sasl,
            'sasl.mechanisms': 'PLAIN',
            'security.protocol': 'SASL_SSL',
            'sasl.username': 'token',
            'sasl.password': api_key,
            'log.connection.close': False,
            # 'debug': 'all'
        }
        self.producer_options = {
            'client.id': 'kafka-python-console-sample-producer'
        }
        self.producer_options.update(self.driver_options)
        self.consumer_options = {
            'client.id': 'kafka-python-console-sample-consumer',
            'group.id': 'kafka-python-console-sample-group'
        }
        self.consumer_options.update(self.driver_options)
        self.running = None

    def stop(self):
        self.running = False

    def delivery_report(self, err, msg):
        """Called once for each message produced to indicate delivery result.
        Triggered by poll() or flush()."""
        if err is not None:
            print('Message delivery failed: {}'.format(err))
        else:
            print('Message delivered to {} [{}]'.format(msg.topic(), msg.partition()))

    def produce(self, topic, data):
        """Produce/upload data to a Kafka topic."""
        p = Producer(self.producer_options)
        print("Running?")

        # Asynchronously produce a message. The delivery report callback will be
        # triggered from poll() or flush() below, once the message has been
        # successfully delivered or failed permanently.
        p.produce(topic, data, callback=self.delivery_report)

        # Wait for any outstanding messages to be delivered and delivery report
        # callbacks to be triggered.
        p.flush()
        print("Done?")

    def consume(self, topic, method_class=None):
        """Consume/read data from a Kafka topic. Works as a listener and
        triggers the run() function on a method_class."""
        print("raaa")
        kafka_consumer = Consumer(self.consumer_options)
        kafka_consumer.subscribe([topic])

        # Now loop on the consumer to read messages
        print("Running?")
        self.running = True
        while self.running:
            msg = kafka_consumer.poll()
            print(msg)
            if msg is not None and msg.error() is None:
                print('Message consumed: topic={0}, partition={1}, offset={2}, key={3}, value={4}'.format(
                    msg.topic(),
                    msg.partition(),
                    msg.offset(),
                    msg.key().decode('utf-8'),
                    msg.value().decode('utf-8')))
            else:
                print('No messages consumed')

        print("Here?")
        kafka_consumer.unsubscribe()
        kafka_consumer.close()
        print("Ending?")


mock = {'yas': 'yas', 'yas2': 'yas2'}
kafka = Kafka(config['kafka']['kafka_brokers_sasl'], config['kafka']['api_key'])
kafka.produce(config['kafka']['topic'], json.dumps(mock))
kafka.consume(config['kafka']['topic'])
Running this, I get the following output printed:
Running?
Message delivered to DANIEL_TEST [0]
Done?
raaa
Running?
<cimpl.Message object at 0x104e4c390>
No messages consumed
Answer 0 (score: 0)
I'm not a Python expert, but it looks like you start consuming right after you've produced the message?
kafka.produce(config['kafka']['topic'], json.dumps(mock))
kafka.consume(config['kafka']['topic'])
You need to call the consume function before calling the produce function, because when a new consumer starts, its offset defaults to the latest one. So, for example, if you produce a message at offset 5 and then start a new consumer, by default that consumer's offset will be at offset 6 and it won't consume the message produced at offset 5.
The solution is to either start consuming before producing anything, or to configure the consumer to read messages from the beginning of the partition. That can be done by setting auto.offset.reset to earliest, but I think the first solution is simpler.
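If you do go the config route, here is a minimal sketch of how the consumer options from your class could look with that setting (note that auto.offset.reset only takes effect when the consumer group has no committed offsets yet):

consumer_options = {
    'client.id': 'kafka-python-console-sample-consumer',
    'group.id': 'kafka-python-console-sample-group',
    # Read from the beginning of each partition when this group has no
    # committed offset yet; the default ('latest') skips older messages.
    'auto.offset.reset': 'earliest',
}
consumer_options.update(driver_options)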
Answer 1 (score: 0)
I had the same problem.
The driver options must contain the SSL certificate path, so you have to set ssl.ca.location to your system's certificate location, or the equivalent documented here: https://github.com/ibm-messaging/event-streams-samples/blob/master/kafka-python-console-sample/app.py#L75
Then it worked!
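For example, a minimal sketch of the driver options from the question with the certificate path added ('/etc/ssl/certs' is an assumption for a typical Linux system; use whatever location matches your environment):

driver_options = {
    'bootstrap.servers': kafka_brokers_sasl,
    'security.protocol': 'SASL_SSL',
    # Path to the CA certificates used to verify the broker's certificate.
    # '/etc/ssl/certs' is a common location on Linux; adjust for your system.
    'ssl.ca.location': '/etc/ssl/certs',
    'sasl.mechanisms': 'PLAIN',
    'sasl.username': 'token',
    'sasl.password': api_key,
    'log.connection.close': False,
}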