Error: "no such host" in Heartbeat logs in a Kubernetes cluster

Asked: 2019-10-30 10:38:55

Tags: docker elasticsearch kubernetes elastic-stack heartbeat

I have set up the ELK stack on a Kubernetes cluster, currently using elasticsearch:v6.2.4, logstash:6.3.0, logspout, metricbeat, and heartbeat.

All of these services work fine except Heartbeat.

The issue with Heartbeat is that I keep getting error logs.

------------------------------------------------------------

Here is my Kubernetes configuration file; you can see the heartbeat.yml configuration inside it:

apiVersion: v1
kind: ConfigMap
metadata:
  name: heartbeat-config
  namespace: kube-system
  labels:
    k8s-app: heartbeat
data:
  heartbeat.yml: |
    heartbeat.monitors:
    - type: http
      schedule: '@every 5s'
      urls: ["http://elasticsearch-logging:9200","http://kibana-logging:5601","http://cfo:3003/v1","http://front-end:5000","http://crm-proxy:3002/v1","http://mongoclient:3000","http://cron-scheduler:3007/v1","http://cso:3005","http://database:27017","http://direct-debits:3009/v1","http://loan-management:3008/v1","http://settings:4001/core"]
      check.response.status: 200
    - type: icmp
      schedule: '@every 5s'
      hosts:
        - elasticsearch-logging
        - kibana-logging
        - cfo
        - front-end
        - crm-proxy
        - cso
        - mongoclient
        - cron-scheduler
        - database
        - direct-debits
        - loan-management
    processors:
    - add_cloud_metadata:
    output.elasticsearch:
      hosts: ['elasticsearch-logging:9200']
      username: elastic
      password: changeme

---

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: heartbeat
  namespace: kube-system
  labels:
    k8s-app: heartbeat
spec:
  template:
    metadata:
      labels:
        k8s-app: heartbeat
    spec:
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: heartbeat
        image: docker.elastic.co/beats/heartbeat:6.3.0
        args: [
          "-c", "/usr/share/heartbeat/heartbeat.yml",
          "-e",
          "-system.hostfs=/hostfs",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch-logging
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /usr/share/heartbeat/heartbeat.yml
          readOnly: true
          subPath: heartbeat.yml
        - name: dockersock
          mountPath: /var/run/docker.sock
        - name: proc
          mountPath: /hostfs/proc
          readOnly: true
        - name: cgroup
          mountPath: /hostfs/sys/fs/cgroup
          readOnly: true
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: cgroup
        hostPath:
          path: /sys/fs/cgroup
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: config
        configMap:
          defaultMode: 0600
          name: heartbeat-config
      # We set an `emptyDir` here to ensure the manifest will deploy correctly.
      # It's recommended to change this to a `hostPath` folder, to ensure internal data
      # files survive pod changes (ie: version upgrade)
      - name: data
        emptyDir: {}
---

# Deploy singleton instance in the whole cluster for some unique data sources, like kube-state-metrics
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  namespace: kube-system
  name: heartbeat
  labels:
    k8s-app: heartbeat
spec:
  template:
    metadata:
      labels:
        k8s-app: heartbeat
    spec:
      containers:
      - name: heartbeat
        image: docker.elastic.co/beats/heartbeat:6.3.0
        args: [
          "-c", "/usr/share/heartbeat/heartbeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch-logging
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /usr/share/heartbeat/heartbeat.yml
          readOnly: true
          subPath: heartbeat.yml
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: heartbeat-config

------------------------------------------------------------

Logs

ICMP log (a single entry):

{
  "_index": "heartbeat-6.3.0-2019.10.30",
  "_type": "doc",
  "_id": "DA8GG24BaP0t7Q7zj5w-",
  "_score": 1,
  "_source": {
    "@timestamp": "2019-10-30T04:57:24.052Z",
    "beat": {
      "name": "heartbeat-64c4bfc49f-xgx2d",
      "hostname": "heartbeat-64c4bfc49f-xgx2d",
      "version": "6.3.0"
    },
    "meta": {
      "cloud": {
        "region": "eu-central-1",
        "availability_zone": "eu-central-1a",
        "provider": "ec2",
        "instance_id": "i-02f044f80723acc15",
        "machine_type": "t2.medium"
      }
    },
    "resolve": {
      "host": "crm-proxy"
    },
    "error": {
      "type": "io",
      "message": "lookup crm-proxy on 10.100.0.10:53: no such host"
    },
    "monitor": {
      "duration": {
        "us": 13120
      },
      "status": "down",
      "id": "icmp-icmp-host-ip@crm-proxy",
      "name": "icmp",
      "type": "icmp",
      "host": "crm-proxy"
    },
    "type": "monitor",
    "host": {
      "name": "heartbeat-64c4bfc49f-xgx2d"
    }
  },
  "fields": {
    "@timestamp": [
      "2019-10-30T04:57:24.052Z"
    ]
  }
}

For example: if I exec into one of the service pods and curl http://crm-proxy:3002/v1, I get a response.
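Roughly, that check looks like the following; the pod name here is only a placeholder, and it assumes curl is available inside the container:

# exec into any pod in the default namespace and hit the service by its name
kubectl exec -it front-end-<pod-id> -n default -- curl -s http://crm-proxy:3002/v1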

Note: I ran the same services on Docker Swarm, using the same heartbeat.yml configuration for Elasticsearch, and there I got the correct results.

I can't figure out why the Heartbeat service is throwing the "no such host" error.
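The failing lookup can be reproduced from inside the Heartbeat pod itself; a minimal sketch (the pod name is a placeholder, and it assumes a DNS tool such as nslookup is present in the image):

# exec into the Heartbeat pod (it runs with hostNetwork and ClusterFirstWithHostNet DNS)
kubectl exec -it heartbeat-<pod-id> -n kube-system -- nslookup crm-proxy
# this is the same lookup that Heartbeat reports as failing against 10.100.0.10:53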

HTTP log (a single entry). The monitored services all live in the default namespace, per kubectl get svc:

{
  "_index": "heartbeat-6.3.0-2019.10.30",
  "_type": "doc",
  "_id": "axozHG4BaP0t7Q7zAcFW",
  "_score": 1,
  "_source": {
    "@timestamp": "2019-10-30T10:25:34.051Z",
    "resolve": {
      "host": "crm-proxy"
    },
    "tcp": {
      "port": 3002
    },
    "type": "monitor",
    "host": {
      "name": "heartbeat-64c4bfc49f-xgx2d"
    },
    "beat": {
      "name": "heartbeat-64c4bfc49f-xgx2d",
      "hostname": "heartbeat-64c4bfc49f-xgx2d",
      "version": "6.3.0"
    },
    "meta": {
      "cloud": {
        "instance_id": "i-02f044f80723acc15",
        "machine_type": "t2.medium",
        "region": "eu-central-1",
        "availability_zone": "eu-central-1a",
        "provider": "ec2"
      }
    },
    "error": {
      "message": "lookup crm-proxy on 10.100.0.10:53: no such host",
      "type": "io"
    },
    "monitor": {
      "type": "http",
      "host": "crm-proxy",
      "duration": {
        "us": 34904
      },
      "status": "down",
      "id": "http@http://crm-proxy:3002/v1",
      "scheme": "http",
      "name": "http"
    },
    "http": {
      "url": "http://crm-proxy:3002/v1"
    }
  },
  "fields": {
    "@timestamp": [
      "2019-10-30T10:25:34.051Z"
    ]
  }
}

Any help would be appreciated!

0 Answers:

No answers yet.