Filebeat cannot resolve a dynamic hosts parameter in the Kafka output connector

Time: 2018-05-09 02:16:50

Tags: apache-kafka filebeat

I am using the Filebeat -> Kafka output connector, and I want to build both the hosts and topic parameters from information carried in the messages Filebeat is processing.

To my surprise, the exact same expression is resolved when used for the topic but not for the hosts field. Any suggestions on how to achieve this?

My configuration is as follows:

kafka.yaml: |
  processors:
  - add_kubernetes_metadata:
      namespace: {{ .Release.Namespace }}
  # Drop all log lines that don't contain kubernetes.labels.entry field
  - drop_event:
      when:
          not:
            regexp:
              kubernetes.labels.entry: ".*"
  filebeat.config_dir: /conf/
  output.kafka: 
    hosts: '%{[kubernetes][labels][entry]}'
    topic: '%{[kubernetes][labels][entry]}'
    required_acks: 1
    version: 0.11.0.0
    client_id: filebeat
    bulk_max_size: 100
    max_message_bytes: 20480

This is the error message I get from Filebeat:

2018/05/09 01:54:29.805431 log.go:36: INFO Failed to connect to broker [[%{[kubernetes][labels][entry]} dial tcp: address %{[kubernetes][labels][entry]}: missing port in address]]: %!s(MISSING)

I did try adding a port to the configuration above, but the error message still shows that the field is not being resolved:

2018/05/09 02:13:41.392742 log.go:36: INFO client/metadata fetching metadata for all topics from broker [[%{[kubernetes][labels][entry]}:9092]]
2018/05/09 02:13:41.392854 log.go:36: INFO Failed to connect to broker [[%{[kubernetes][labels][entry]}:9092 dial tcp: address %{[kubernetes][labels][entry]}:9092: unexpected '[' in address]]: %!s(MISSING)

1 answer:

Answer 0 (score: 0)

I found the answer on the Elastic forums:

You cannot control hosts or files (in the case of the file output) via variables. Doing so would require Beats to manage state and connections to each different host. You can only use variables to control the destination topic, but not the broker.

So what I wanted to do is currently not possible.
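
For reference, a minimal sketch of what the quoted advice does permit: a static broker list with only the topic templated from the event. The broker addresses below are placeholders, not values from my cluster:

  output.kafka:
    # Brokers must be listed statically; format strings are not resolved here.
    hosts: ["kafka-0.kafka:9092", "kafka-1.kafka:9092"]  # placeholder addresses
    # The topic can still be built from an event field.
    topic: '%{[kubernetes][labels][entry]}'
    required_acks: 1
    version: 0.11.0.0
    client_id: filebeat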