Why do I get PartitionOwnedError and ConsumerStoppedException when starting some consumers?

Time: 2016-09-21 11:36:31

Tags: pykafka

I use pykafka to fetch messages from a Kafka topic, do some processing, and then update the results into MongoDB. Since pymongo can only update one item at a time, I start 100 processes. But on startup, some of the processes fail with the errors PartitionOwnedError and ConsumerStoppedException. I don't know why. Thanks.

import datetime

from pykafka import KafkaClient

kafka_cfg = conf['kafka']
kafka_client = KafkaClient(kafka_cfg['broker_list'])
topic = kafka_client.topics[topic_name]

balanced_consumer = topic.get_balanced_consumer(
    consumer_group=group,
    auto_commit_enable=kafka_cfg['auto_commit_enable'],
    zookeeper_connect=kafka_cfg['zookeeper_list'],
    zookeeper_connection_timeout_ms=kafka_cfg['zookeeper_conn_timeout_ms'],
    consumer_timeout_ms=kafka_cfg['consumer_timeout_ms'],
)

while True:
    for msg in balanced_consumer:
        if msg is not None:
            try:
                # the message value is expected to be a dict literal
                value = eval(msg.value)
                id = long(value.pop("id"))
                value["when_update"] = datetime.datetime.now()
                query = {"_id": id}

                # upsert the document for this id (third positional arg = upsert)
                result = collection.update_one(query, {"$set": value}, True)
            except Exception as e:
                log.error("Fail to update: %s, msg: %s", e, msg.value)
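
For context, the 100 worker processes are started along these lines (a hypothetical sketch, not the asker's actual launcher; consume_and_update is assumed to wrap the consumer loop above):

from multiprocessing import Process

def consume_and_update(worker_id):
    # hypothetical wrapper: build the client/consumer and run the loop shown above
    pass

if __name__ == "__main__":
    workers = [Process(target=consume_and_update, args=(i,)) for i in range(100)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()

The failing processes die with tracebacks like the following: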

Traceback (most recent call last):
  File "dump_daily_summary.py", line 182, in <module>
    dump_daily_summary.run()
  File "dump_daily_summary.py", line 133, in run
    for msg in self.balanced_consumer:
  File "/data/share/python2.7/lib/python2.7/site-packages/pykafka-2.5.0.dev1-py2.7-linux-x86_64.egg/pykafka/balancedconsumer.py", line 745, in __iter__
    message = self.consume(block=True)
  File "/data/share/python2.7/lib/python2.7/site-packages/pykafka-2.5.0.dev1-py2.7-linux-x86_64.egg/pykafka/balancedconsumer.py", line 734, in consume
    raise ConsumerStoppedException
pykafka.exceptions.ConsumerStoppedException

Traceback (most recent call last):
  File "dump_daily_summary.py", line 182, in <module>
    dump_daily_summary.run()
  File "dump_daily_summary.py", line 133, in run
    for msg in self.balanced_consumer:
  File "/data/share/python2.7/lib/python2.7/site-packages/pykafka-2.5.0.dev1-py2.7-linux-x86_64.egg/pykafka/balancedconsumer.py", line 745, in __iter__
    message = self.consume(block=True)
  File "/data/share/python2.7/lib/python2.7/site-packages/pykafka-2.5.0.dev1-py2.7-linux-x86_64.egg/pykafka/balancedconsumer.py", line 726, in consume
    self._raise_worker_exceptions()
  File "/data/share/python2.7/lib/python2.7/site-packages/pykafka-2.5.0.dev1-py2.7-linux-x86_64.egg/pykafka/balancedconsumer.py", line 271, in _raise_worker_exceptions
    raise ex
pykafka.exceptions.PartitionOwnedError

3 Answers:

Answer 0: (score: 1)

PartitionOwnedError: check whether some background processes are still consuming in the same consumer_group; there may not be enough available partitions left to start another consumer.

ConsumerStoppedException: you can try upgrading your pykafka version (https://github.com/Parsely/pykafka/issues/574).
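
As a quick sanity check (a minimal sketch, not part of the original answer; the broker address and topic name are placeholders), you can compare the topic's partition count with the number of consumers you plan to run, since in a balanced consumer group each partition is owned by at most one consumer:

from pykafka import KafkaClient

client = KafkaClient(hosts="127.0.0.1:9092")   # placeholder broker list
topic = client.topics["my_topic"]              # placeholder topic name

num_partitions = len(topic.partitions)
num_consumers = 100                            # number of consumer processes you plan to start
if num_consumers > num_partitions:
    print("topic has %d partitions, so at most %d consumers can own one"
          % (num_partitions, num_partitions))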

Answer 1: (score: 0)

I ran into the same problem as you, but I was puzzled by the other solutions, such as adding enough partitions for the consumers or upgrading the pykafka version. In fact, my setup already met those conditions.

Here are the versions of the tools:

python 2.7.10
kafka 2.11-0.10.0.0
zookeeper 3.4.8
pykafka 2.5.0

Here is my code:

class KafkaService(object):
    def __init__(self, topic):
        self.client_hosts = get_conf("kafka_conf", "client_host", "string")
        self.topic = topic
        self.con_group = topic
        self.zk_connect = get_conf("kafka_conf", "zk_connect", "string")

    def kafka_consumer(self):
        """kafka-consumer client, using pykafka

        :return: {"id": 1, "url": "www.baidu.com", "sitename": "baidu"}
        """
        from pykafka import KafkaClient
        consumer = None
        try:
            kafka = KafkaClient(hosts=str(self.client_hosts))
            topic = kafka.topics[self.topic]

            consumer = topic.get_balanced_consumer(
                consumer_group=self.con_group,
                auto_commit_enable=True,
                zookeeper_connect=self.zk_connect,
            )
        except Exception as e:
            logger.error(str(e))

        while True:
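            # block=False: return None immediately when no message is available,
            # rather than blocking inside consume() (see the explanation below)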
            message = consumer.consume(block=False)
            if message:
                print "message:", message.value
                yield message.value

Both exceptions (ConsumerStoppedException and PartitionOwnedError) are raised by the consume(block=True) call in pykafka.balancedconsumer.

Of course, I recommend reading the source code of that function.

It has a parameter block=True; after changing it to False, the program no longer hits those exceptions.

After that, the Kafka consumer worked fine.
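
For what it's worth, the generator above only yields when consume(block=False) actually returns a message, so a caller can simply iterate over it (a minimal usage sketch; the topic name and handle_message are placeholders):

service = KafkaService("my_topic")      # placeholder topic name
for raw in service.kafka_consumer():
    handle_message(raw)                 # hypothetical: parse the value and update MongoDB

One thing to keep in mind is that with block=False the inner while loop polls continuously when the topic is idle, so adding a short time.sleep() between empty polls is a common refinement.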

Answer 2: (score: 0)

This behavior is caused by a long-standing bug that was recently pinned down and is currently being fixed. The workaround we use in production at Parse.ly is to run our consumers in an environment that handles these errors by automatically restarting them until all partitions are owned.
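
A rough illustration of that workaround, restarting in-process rather than through an external supervisor (a hedged sketch, not Parse.ly's actual setup; the broker list, ZooKeeper address, topic, and group are placeholders and process() is a hypothetical handler):

import logging
import time

from pykafka import KafkaClient
from pykafka.exceptions import ConsumerStoppedException, PartitionOwnedError

log = logging.getLogger(__name__)

def consume_forever(hosts, topic_name, group):
    while True:
        try:
            client = KafkaClient(hosts=hosts)
            consumer = client.topics[topic_name].get_balanced_consumer(
                consumer_group=group,
                auto_commit_enable=True,
                zookeeper_connect="127.0.0.1:2181",   # placeholder ZooKeeper address
            )
            for msg in consumer:
                if msg is not None:
                    process(msg.value)                # hypothetical message handler
        except (ConsumerStoppedException, PartitionOwnedError) as e:
            # restart the consumer until it owns its partitions again;
            # a production version would also stop() the old consumer
            log.warning("consumer error %r, restarting in 5s", e)
            time.sleep(5)

if __name__ == "__main__":
    consume_forever("127.0.0.1:9092", "my_topic", "my_group")   # placeholder values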