Python cannot catch KafkaException from confluent_kafka

Date: 2017-12-22 07:11:24

Tags: python apache-kafka confluent-kafka

Here is part of my code:

class KafkaProducer:

    def __init__(self):
        pass

    bootstrap_server_host = system_config.get_kafka_bootstrap_server()
    producer = Producer({'bootstrap.servers': bootstrap_server_host, "log.connection.close":False})

    @classmethod
    def send(cls, topic, key, value, data_type=None, uid=None):
        try:
            data = {"data": value, "createTime": long(time.time() * 1000)}
            if data_type is not None:
                data["type"] = int(data_type)
            if uid is not None:
                data["uid"] = long(uid)
            cls.producer.produce(topic, json.dumps(data), key)
            cls.producer.poll(0)
        except BufferError as e:
            logger.error('%% Local producer queue is full ' \
                         '(%d messages awaiting delivery): try again\n' %
                         len(cls.producer))
            raise e

class new_application_scanner():
    @classmethod
    def scan_new_application(cls):
        db_source = None
        try:
            db_source = DBConnector().connect()
            db_cur = db_source.cursor()

            ...

            KafkaProducer.send("RiskEvent", str(uid),
                   {"uid": uid, "country_id": user_info[1], "event_id": constant.RISK_EVENT_NEW_APPLICATION})

            ...
        except Exception as e:
            logger.error(traceback.format_exc())
        finally:
            if db_source is not None:
                db_source.close()




def run_scan_new_application():
    while is_scan_new_application_active:
        try:
            logging.info("scan_new_application starts at %s", time.time())
            new_application_scanner.scan_new_application()
            logging.info("scan_new_application ends at %s", time.time())
        except Exception as e:
            logging.error("new_application_scanner Error:%s", format(e))
            logging.error(traceback.format_exc())
        time.sleep(10)


t1 = threading.Thread(target=run_scan_new_application, name='run_scan_new_application', args=([]))
t1.start()

I have a Kafka cluster of two servers. When I restart the two servers one after the other, KafkaProducer.send() throws a KafkaException (probably some error from confluent_kafka), and some exception logs appear.

The strange thing is that the exception keeps propagating out of scan_new_application, and there are also exception logs in run_scan_new_application. The thread even stops. Here is the exception log:

2017-12-21 07:11:49 INFO pre_risk_control_flow.py:71 pid-16984 scan_new_application starts at 1513840309.6
2017-12-21 07:11:49 ERROR new_application_scan.py:165 pid-16984 Traceback (most recent call last):
  File "/home/ubuntu/data/code/risk/Feature_Engine/data_retrive/pre_risk_control_flow/new_application_scan.py", line 163, in scan_new_application
    {"uid": uid, "country_id": user_info[1], "event_id": constant.RISK_EVENT_NEW_APPLICATION})
  File "/home/ubuntu/data/code/risk/Feature_Engine/data_retrive/kafka_client/Producer.py", line 27, in send
    cls.producer.produce(topic, json.dumps(data), key)
KafkaException: KafkaError{code=_UNKNOWN_TOPIC,val=-188,str="Unable to produce message: Local: Unknown topic"}

2017-12-21 07:11:49 ERROR pre_risk_control_flow.py:75 pid-16984 new_application_scanner Error:KafkaError{code=_UNKNOWN_TOPIC,val=-188,str="Unable to produce message: Local: Unknown topic"}

1 Answer:

Answer 0 (score: 0):

The underlying client is raising KafkaException KafkaError{code=_UNKNOWN_TOPIC..} because it (now) knows the requested topic does not exist in the cluster (and automatic topic creation is disabled). This is expected behavior.
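
If you want to confirm which topics the cluster actually knows about before producing, newer confluent-kafka-python releases expose list_topics() on the producer. A minimal sketch, assuming such a release and using the "RiskEvent" topic from the question (the broker address is a placeholder):

from confluent_kafka import Producer

# Query cluster metadata to see whether a topic currently exists.
producer = Producer({'bootstrap.servers': 'localhost:9092'})  # placeholder address
metadata = producer.list_topics(timeout=10)  # returns ClusterMetadata
if "RiskEvent" not in metadata.topics:
    # Either create the topic explicitly or enable auto.create.topics.enable
    # on the brokers before producing to it.
    print("Topic 'RiskEvent' is not known to the cluster")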

You are seeing the exception in run_scan_new_application because you are not catching KafkaException in send().
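
A minimal sketch of one way to handle this, assuming you want the scanning thread to keep running: catch KafkaException (as well as BufferError) inside send() instead of letting it propagate. The logger name, the placeholder broker address, and the decision to log rather than re-raise are assumptions, not the poster's exact code:

import json
import logging
import time

from confluent_kafka import KafkaException, Producer

logger = logging.getLogger(__name__)

class KafkaProducer:
    producer = Producer({'bootstrap.servers': 'localhost:9092'})  # placeholder address

    @classmethod
    def send(cls, topic, key, value):
        data = {"data": value, "createTime": int(time.time() * 1000)}
        try:
            cls.producer.produce(topic, json.dumps(data), key)
            cls.producer.poll(0)
        except BufferError:
            # Local queue is full; the caller may retry after poll() drains it.
            logger.error('Local producer queue is full (%d messages awaiting delivery)',
                         len(cls.producer))
            raise
        except KafkaException as e:
            # e.g. KafkaError{code=_UNKNOWN_TOPIC,...} while brokers are restarting;
            # logging it here keeps it from stopping run_scan_new_application's thread.
            logger.error('Failed to produce to topic %s: %s', topic, e)

Whether to swallow, retry, or re-raise in the KafkaException branch depends on how strong a delivery guarantee the scanner needs.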