Filebeat Kubernetes processor and filtering

Date: 2017-12-07 10:28:32

Tags: elasticsearch logging kubernetes kibana filebeat

I am trying to ship my K8s pod logs to Elasticsearch using Filebeat.

I am following this guide: https://www.elastic.co/guide/en/beats/filebeat/6.0/running-on-kubernetes.html

Everything works as expected, but I want to filter out the events coming from system pods. My updated configuration looks like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-prospectors
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  kubernetes.yml: |-
    - type: log
      paths:
        - /var/lib/docker/containers/*/*.log
      multiline.pattern: '^\s'
      multiline.match: after
      json.message_key: log
      json.keys_under_root: true
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
            namespace: ${POD_NAMESPACE}
        - drop_event.when.regexp:
            or:
              kubernetes.pod.name: "weave-net.*"
              kubernetes.pod.name: "external-dns.*"
              kubernetes.pod.name: "nginx-ingress-controller.*"
              kubernetes.pod.name: "filebeat.*"

I am trying to ignore the weave-net, external-dns, ingress-controller and filebeat events with the following:

- drop_event.when.regexp:
    or:
      kubernetes.pod.name: "weave-net.*"
      kubernetes.pod.name: "external-dns.*"
      kubernetes.pod.name: "nginx-ingress-controller.*"
      kubernetes.pod.name: "filebeat.*"

However, they keep arriving in Elasticsearch.

Thanks in advance :)

3 answers:

Answer 0 (score: 3):

The conditions need to be a list:

- drop_event.when.regexp:
    or:
      - kubernetes.pod.name: "weave-net.*"
      - kubernetes.pod.name: "external-dns.*"
      - kubernetes.pod.name: "nginx-ingress-controller.*"
      - kubernetes.pod.name: "filebeat.*"

I am also not sure your ordering of the parameters works. A working example of mine looks like this:

- drop_event:
    when:
      or:
        # Exclude traces from Zipkin
        - contains.path: "/api/v"
        # Exclude Jolokia calls
        - contains.path: "/jolokia/?"
        # Exclude pinging metrics
        - equals.path: "/metrics"
        # Exclude pinging health
        - equals.path: "/health"
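
Putting those two points together for the pod names from the question, an untested sketch using the expanded when/or/regexp form could look like this:

processors:
  - add_kubernetes_metadata:
      in_cluster: true
      namespace: ${POD_NAMESPACE}
  - drop_event:
      when:
        or:
          - regexp:
              kubernetes.pod.name: "weave-net.*"
          - regexp:
              kubernetes.pod.name: "external-dns.*"
          - regexp:
              kubernetes.pod.name: "nginx-ingress-controller.*"
          - regexp:
              kubernetes.pod.name: "filebeat.*"

Note that drop_event has to come after add_kubernetes_metadata, since that processor is what adds the kubernetes.pod.name field the regexp conditions match on.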

Answer 1 (score: 3):

This worked for me in filebeat 6.1.3:

        - drop_event.when:
            or:
            - equals:
                kubernetes.container.name: "filebeat"
            - equals:
                kubernetes.container.name: "prometheus-kube-state-metrics"
            - equals:
                kubernetes.container.name: "weave-npc"
            - equals:
                kubernetes.container.name: "nginx-ingress-controller"
            - equals:
                kubernetes.container.name: "weave"

Answer 2 (score: 2):

I am using a different approach, which is less efficient in terms of the number of logs shipped through the logging pipeline.

Similar to your approach, I deploy one filebeat instance on each of my nodes using a DaemonSet. Nothing special here; this is the configuration I am using:

apiVersion: v1
data:
  filebeat.yml: |-
    filebeat.config:
      prospectors:
        # Mounted `filebeat-prospectors` configmap:
        path: ${path.config}/prospectors.d/*.yml
        # Reload prospectors configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    processors:
      - add_cloud_metadata:

    output.logstash:
      hosts: ['logstash.elk.svc.cluster.local:5044']
kind: ConfigMap
metadata:
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
  name: filebeat-config

And this is the one for the prospectors:

apiVersion: v1
data:
  kubernetes.yml: |-
    - type: log
      paths:
        - /var/lib/docker/containers/*/*.log
      json.message_key: log
      json.keys_under_root: true
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
            namespace: ${POD_NAMESPACE}
kind: ConfigMap
metadata:
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
  name: filebeat-prospectors

The DaemonSet spec:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
  name: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
      kubernetes.io/cluster-service: "true"
  template:
    metadata:
      labels:
        k8s-app: filebeat
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - args:
        - -c
        - /etc/filebeat.yml
        - -e
        command:
        - /usr/share/filebeat/filebeat
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: docker.elastic.co/beats/filebeat:6.0.1
        imagePullPolicy: IfNotPresent
        name: filebeat
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        securityContext:
          runAsUser: 0
        volumeMounts:
        - mountPath: /etc/filebeat.yml
          name: config
          readOnly: true
          subPath: filebeat.yml
        - mountPath: /usr/share/filebeat/prospectors.d
          name: prospectors
          readOnly: true
        - mountPath: /usr/share/filebeat/data
          name: data
        - mountPath: /var/lib/docker/containers
          name: varlibdockercontainers
          readOnly: true
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          name: filebeat-config
        name: config
      - hostPath:
          path: /var/lib/docker/containers
          type: ""
        name: varlibdockercontainers
      - configMap:
          defaultMode: 384
          name: filebeat-prospectors
        name: prospectors
      - emptyDir: {}
        name: data

Basically, all data from all logs of all containers is forwarded to logstash, which is reachable at the service endpoint logstash.elk.svc.cluster.local:5044 (a service named "logstash" in the "elk" namespace).

For brevity's sake, I will only give you the logstash configuration (if you need more specific help with kubernetes, please ask in the comments).

The logstash.yml file is very basic:

http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline

It just points path.config at the directory where I mount the pipeline configuration files.
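
The answer does not show the Kubernetes side of logstash, so purely as a hedged sketch (the Deployment name, namespace, labels, ConfigMap names and image tag are all assumptions, not taken from the answer), logstash.yml and the pipeline directory could be mounted from ConfigMaps like this:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: logstash
  namespace: elk
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: logstash                  # hypothetical label, matched by the Service sketch below
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:6.0.1   # version is an assumption
        ports:
        - containerPort: 5044          # beats input
        volumeMounts:
        # logstash.yml shown above, mounted over the image default
        - mountPath: /usr/share/logstash/config/logstash.yml
          name: config
          subPath: logstash.yml
          readOnly: true
        # pipeline directory referenced by path.config
        - mountPath: /usr/share/logstash/pipeline
          name: pipeline
          readOnly: true
      volumes:
      - configMap:
          name: logstash-config        # hypothetical ConfigMap containing logstash.yml
        name: config
      - configMap:
          name: logstash-pipeline      # hypothetical ConfigMap containing the *.conf files below
        name: pipeline

The pipeline configuration files themselves are the following: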

10-beats.conf: declares the input from filebeat (port 5044 has to be exposed by the service named "logstash"):

input {
  beats {
    port => 5044
    ssl => false
  }
}
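
The answer does not include the Service manifest either; a minimal sketch of a "logstash" Service in the "elk" namespace exposing port 5044 could look like this (the selector assumes the hypothetical app: logstash label from the Deployment sketch above):

apiVersion: v1
kind: Service
metadata:
  name: logstash
  namespace: elk
spec:
  selector:
    app: logstash            # must match the labels on the logstash pods
  ports:
  - name: beats
    port: 5044
    targetPort: 5044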

49-filter-logs.conf: this filter basically drops logs coming from pods that do not carry an "elk" label. For pods that do carry the "elk" label, it keeps the logs of the containers listed in that label. For example, if a pod has two containers named "nginx" and "python", setting the label "elk" to the value "nginx" keeps only the logs coming from the nginx container and drops those from the python container. The type of the log is set to the namespace the pod runs in. This may not suit everybody (you end up with a single elasticsearch index for all logs belonging to a namespace), but it works for me because my logs are homogeneous. (A sketch of a pod carrying such a label follows the filter below.)

filter {
    if ![kubernetes][labels][elk] {
        drop {}
    }
    if [kubernetes][labels][elk] {
        # check if kubernetes.labels.elk contains this container name
        mutate {
          split => { "kubernetes[labels][elk]" => "." }
        }
        if [kubernetes][container][name] not in [kubernetes][labels][elk] {
          drop {}
        }
        mutate {
          replace => { "@metadata[type]" => "%{kubernetes[namespace]}" }
          remove_field => [ "beat", "host", "kubernetes[labels][elk]", "kubernetes[labels][pod-template-hash]", "kubernetes[namespace]", "kubernetes[pod][name]", "offset", "prospector[type]", "source", "stream", "time" ]
          rename => { "kubernetes[container][name]" => "container"  }
          rename => { "kubernetes[labels][app]" => "app"  }
        }
    }
}
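
To illustrate the label convention the filter relies on, here is a hedged sketch of the nginx/python pod from the example above (all names and images are hypothetical, not taken from the answer):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: my-namespace
  labels:
    app: my-app
    # keep only the logs of the "nginx" container; the value "nginx.python" would keep both
    elk: "nginx"
spec:
  containers:
  - name: nginx
    image: nginx:1.13
  - name: python
    image: python:3.6
    command: ["python", "-m", "http.server", "8080"]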

The rest of the configuration is about log parsing and is not relevant in this context. The only other important part is the output:

99-output.conf: sends the data to elasticsearch:

output {
  elasticsearch {
    hosts => ["http://elasticsearch.elk.svc.cluster.local:9200"]
    manage_template => false
    index => "%{[@metadata][type]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

I hope you get the idea.

PROs of this approach:

  • Once filebeat and logstash are deployed, you do not need to update the filebeat or logstash configuration to get new logs into kibana, as long as you do not need to parse a new kind of log. You just have to add the label to the pod template.
  • By default, all logs are dropped unless you explicitly add the label.

CONs of this approach:

  • ALL logs from ALL pods travel from filebeat to logstash and are only dropped in logstash. That is a lot of work for logstash and can be resource-intensive, depending on the number of pods in your cluster.

I am sure there are better approaches to this problem, but I find this solution quite handy, at least for my use case.