Unable to access Grafana UI on Kubernetes

Date: 2016-04-21 15:14:46

Tags: kubernetes influxdb grafana

I set up a Kubernetes cluster on OpenStack following the CoreOS guide.

I get the following error when accessing the Grafana UI at http://master-ip:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/:

    Error: 'dial tcp 172.17.0.5:3000: i/o timeout'
    Trying to reach: 'http://172.17.0.5:3000/'

I can access the InfluxDB UI at influxdb-nodeip:8083.

I can curl 172.17.0.5:3000 from inside the node.
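For clarity, the two checks look roughly like this (a sketch; master-ip stands in for my master's address, and 172.17.0.5 is the pod IP from the kubectl describe output below):

    # From inside the node: reaches Grafana directly (this works)
    curl -m 5 http://172.17.0.5:3000/

    # Through the apiserver proxy: this is the request that times out
    curl -m 5 http://master-ip:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/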

Steps I followed:

  1. Created a Kubernetes cluster with 1 master and 1 node.
  2. Created the namespace.
  3. Set up DNS.
  4. Confirmed DNS was working using the busybox example (see the sketch after the log below).
  5. Set up InfluxDB and Grafana.
  6. Grafana container logs:

    2016/04/21 14:53:33 [I] Listen: http://0.0.0.0:3000/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
    .Grafana is up and running.
    Creating default influxdb datasource...
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100   242  100    37  100   205   3274  18143 --:--:-- --:--:-- --:--:-- 18636
    HTTP/1.1 200 OK
    Content-Type: application/json; charset=UTF-8
    Set-Cookie: grafana_sess=cd44a6ed54b863df; Path=/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana; HttpOnly
    Date: Thu, 21 Apr 2016 14:53:34 GMT
    Content-Length: 37
    
    {"id":1,"message":"Datasource added"}
    Importing default dashboards...
    Importing /dashboards/cluster.json ...
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 71639  100    49  100 71590    539   769k --:--:-- --:--:-- --:--:--  776k
    HTTP/1.1 100 Continue
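
The DNS check from step 4, roughly as I ran it (a sketch based on the standard busybox example; 10.100.0.1 is the kubernetes service IP, visible in the iptables dump below):

    # Verify cluster DNS from the busybox pod
    kubectl exec busybox -- nslookup kubernetes.default
    # Expected to resolve to the kubernetes service IP:
    #   Name:      kubernetes.default
    #   Address 1: 10.100.0.1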
    

Cluster info:

    cluster-info
    Kubernetes master is running at <master>:8080
    Heapster is running at <master>:8080/api/v1/proxy/namespaces/kube-system/services/heapster
    KubeDNS is running at <master>:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
    Grafana is running at <master>:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
    InfluxDB is running at <master>:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
    

Versions:

    Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.8", GitCommit:"a8af33dc07ee08defa2d503f81e7deea32dd1d3b", GitTreeState:"clean"}
    Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.8", GitCommit:"a8af33dc07ee08defa2d503f81e7deea32dd1d3b", GitTreeState:"clean"}
    

Node iptables: sudo iptables -n -t nat -L

    Chain PREROUTING (policy ACCEPT)
    target     prot opt source               destination
    KUBE-PORTALS-CONTAINER  all  --  0.0.0.0/0            0.0.0.0/0            /* handle ClusterIPs; NOTE: this must be before the NodePort rules */
    DOCKER     all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL
    KUBE-NODEPORT-CONTAINER  all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL /* handle service NodePorts; NOTE: this must be the last rule in the chain */
    
    Chain INPUT (policy ACCEPT)
    target     prot opt source               destination
    
    Chain OUTPUT (policy ACCEPT)
    target     prot opt source               destination
    KUBE-PORTALS-HOST  all  --  0.0.0.0/0            0.0.0.0/0            /* handle ClusterIPs; NOTE: this must be before the NodePort rules */
    DOCKER     all  --  0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL
    KUBE-NODEPORT-HOST  all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL /* handle service NodePorts; NOTE: this must be the last rule in the chain */
    
    Chain POSTROUTING (policy ACCEPT)
    target     prot opt source               destination
    MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0
    MASQUERADE  tcp  --  172.17.0.5           172.17.0.5           tcp dpt:8086
    MASQUERADE  tcp  --  172.17.0.5           172.17.0.5           tcp dpt:8083
    
    Chain DOCKER (2 references)
    target     prot opt source               destination
    DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:8086 to:172.17.0.5:8086
    DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:8083 to:172.17.0.5:8083
    
    Chain KUBE-NODEPORT-CONTAINER (1 references)
    target     prot opt source               destination
    
    Chain KUBE-NODEPORT-HOST (1 references)
    target     prot opt source               destination
    
    Chain KUBE-PORTALS-CONTAINER (1 references)
    target     prot opt source               destination
    REDIRECT   tcp  --  0.0.0.0/0            10.100.0.1           /* default/kubernetes: */ tcp dpt:443 redir ports 43104
    REDIRECT   udp  --  0.0.0.0/0            10.100.0.10          /* kube-system/kube-dns:dns */ udp dpt:53 redir ports 60423
    REDIRECT   tcp  --  0.0.0.0/0            10.100.0.10          /* kube-system/kube-dns:dns-tcp */ tcp dpt:53 redir ports 35036
    REDIRECT   tcp  --  0.0.0.0/0            10.100.176.182       /* kube-system/monitoring-grafana: */ tcp dpt:80 redir ports 41454
    REDIRECT   tcp  --  0.0.0.0/0            10.100.17.81         /* kube-system/heapster: */ tcp dpt:80 redir ports 40296
    REDIRECT   tcp  --  0.0.0.0/0            10.100.228.184       /* kube-system/monitoring-influxdb:http */ tcp dpt:8083 redir ports 39963
    REDIRECT   tcp  --  0.0.0.0/0            10.100.228.184       /* kube-system/monitoring-influxdb:api */ tcp dpt:8086 redir ports 40214
    
    Chain KUBE-PORTALS-HOST (1 references)
    target     prot opt source               destination
    DNAT       tcp  --  0.0.0.0/0            10.100.0.1           /* default/kubernetes: */ tcp dpt:443 to:10.10.1.84:43104
    DNAT       udp  --  0.0.0.0/0            10.100.0.10          /* kube-system/kube-dns:dns */ udp dpt:53 to:10.10.1.84:60423
    DNAT       tcp  --  0.0.0.0/0            10.100.0.10          /* kube-system/kube-dns:dns-tcp */ tcp dpt:53 to:10.10.1.84:35036
    DNAT       tcp  --  0.0.0.0/0            10.100.176.182       /* kube-system/monitoring-grafana: */ tcp dpt:80 to:10.10.1.84:41454
    DNAT       tcp  --  0.0.0.0/0            10.100.17.81         /* kube-system/heapster: */ tcp dpt:80 to:10.10.1.84:40296
    DNAT       tcp  --  0.0.0.0/0            10.100.228.184       /* kube-system/monitoring-influxdb:http */ tcp dpt:8083 to:10.10.1.84:39963
    DNAT       tcp  --  0.0.0.0/0            10.100.228.184       /* kube-system/monitoring-influxdb:api */ tcp dpt:8086 to:10.10.1.84:40214
    

kubectl describe pod --namespace=kube-system monitoring-influxdb-grafana-v3-grbs1

    Name:                           monitoring-influxdb-grafana-v3-grbs1
    Namespace:                      kube-system
    Image(s):                       gcr.io/google_containers/heapster_influxdb:v0.5,gcr.io/google_containers/heapster_grafana:v2.6.0-2
    Node:                           10.10.1.84/10.10.1.84
    Start Time:                     Thu, 21 Apr 2016 14:53:31 +0000
    Labels:                         k8s-app=influxGrafana,kubernetes.io/cluster-service=true,version=v3
    Status:                         Running
    Reason:
    Message:
    IP:                             172.17.0.5
    Replication Controllers:        monitoring-influxdb-grafana-v3 (1/1 replicas created)
    Containers:
      influxdb:
        Container ID:       docker://4822dc9e98b5b423cdd1ac8fe15cb516f53ff45f48faf05b067765fdb758c96f
        Image:              gcr.io/google_containers/heapster_influxdb:v0.5
        Image ID:           docker://eb8e59964b24fd1f565f9c583167864ec003e8ba6cced71f38c0725c4b4246d1
        QoS Tier:
          memory:   Guaranteed
          cpu:      Guaranteed
        Limits:
          cpu:      100m
          memory:   500Mi
        Requests:
          cpu:              100m
          memory:           500Mi
        State:              Running
          Started:          Thu, 21 Apr 2016 14:53:32 +0000
        Ready:              True
        Restart Count:      0
        Environment Variables:
      grafana:
        Container ID:       docker://46888bd4a4b0c51ab8f03a17db2dbf5bfe329ef7c389b7422b86344a206b3653
        Image:              gcr.io/google_containers/heapster_grafana:v2.6.0-2
        Image ID:           docker://7553afcc1ffd82fe359fe7d69a5d0d7fef3020e45542caeaf95e5623ded41fbb
        QoS Tier:
          cpu:      Guaranteed
          memory:   Guaranteed
        Limits:
          cpu:      100m
          memory:   100Mi
        Requests:
          memory:           100Mi
          cpu:              100m
        State:              Running
          Started:          Thu, 21 Apr 2016 14:53:32 +0000
        Ready:              True
        Restart Count:      0
        Environment Variables:
          INFLUXDB_SERVICE_URL:             http://monitoring-influxdb:8086
          GF_AUTH_BASIC_ENABLED:            false
          GF_AUTH_ANONYMOUS_ENABLED:        true
          GF_AUTH_ANONYMOUS_ORG_ROLE:       Admin
          GF_SERVER_ROOT_URL:               /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
    Conditions:
      Type          Status
      Ready         True
    Volumes:
      influxdb-persistent-storage:
        Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:
      grafana-persistent-storage:
        Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:
      default-token-lacal:
        Type:       Secret (a secret that should populate this volume)
        SecretName: default-token-lacal
    Events:
      FirstSeen     LastSeen        Count   From                    SubobjectPath                           Reason                  Message
      ─────────     ────────        ─────   ────                    ─────────────                           ──────                  ───────
      23m           23m             5       {scheduler }                                                    FailedScheduling        Failed for reason PodFitsHostPorts and possibly others
      22m           22m             1       {kubelet 10.10.1.84}    implicitly required container POD       Created                 Created with docker id 97a95bd1f80a
      22m           22m             1       {scheduler }                                                    Scheduled               Successfully assigned monitoring-influxdb-grafana-v3-grbs1 to 10.10.1.84
      22m           22m             1       {kubelet 10.10.1.84}    implicitly required container POD       Pulled                  Container image "gcr.io/google_containers/pause:0.8.0" already present on machine
      22m           22m             1       {kubelet 10.10.1.84}    spec.containers{grafana}                Pulled                  Container image "gcr.io/google_containers/heapster_grafana:v2.6.0-2" already present on machine
      22m           22m             1       {kubelet 10.10.1.84}    spec.containers{grafana}                Created                 Created with docker id 46888bd4a4b0
      22m           22m             1       {kubelet 10.10.1.84}    spec.containers{grafana}                Started                 Started with docker id 46888bd4a4b0
      22m           22m             1       {kubelet 10.10.1.84}    spec.containers{influxdb}               Pulled                  Container image "gcr.io/google_containers/heapster_influxdb:v0.5" already present on machine
      22m           22m             1       {kubelet 10.10.1.84}    implicitly required container POD       Started                 Started with docker id 97a95bd1f80a
      22m           22m             1       {kubelet 10.10.1.84}    spec.containers{influxdb}               Created                 Created with docker id 4822dc9e98b5
      22m           22m             1       {kubelet 10.10.1.84}    spec.containers{influxdb}               Started                 Started with docker id 4822dc9e98b5
    

Not sure what else to share; I can provide more information if needed. Please help, I haven't been able to find a solution.

EDIT

Output of the command suggested in the answer below:

    kubectl attach -it --namespace=kube-system monitoring-influxdb-grafana-v2-c2tj9
    
    [04/21/16 23:30:19] [INFO] Loading configuration file /config/config.toml

    +---------------------------------------------+
    |  _____        __ _            _____  ____   |
    | |_   _|      / _| |          |  __ \|  _ \  |
    |   | |  _ __ | |_| |_   ___  _| |  | | |_) | |
    |   | | | '_ \|  _| | | | \ \/ / |  | |  _ <  |
    |  _| |_| | | | | | | |_| |>  <| |__| | |_) | |
    | |_____|_| |_|_| |_|\__,_/_/\_\_____/|____/  |
    +---------------------------------------------+
    

Thanks

1 Answer:

Answer 0 (score: 3)

To help narrow down where the problem is, I'd suggest checking whether the master can reach the pod at all. That will tell you whether the problem is with the overall network setup or just with service routing from the master.
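For example, run from the master (a sketch; the pod IP 172.17.0.5 and the service portal IP 10.100.176.182 are taken from your describe and iptables output above):

    # Can the master reach the pod directly?
    curl -m 5 http://172.17.0.5:3000/

    # Can it reach Grafana through the service portal IP?
    curl -m 5 http://10.100.176.182/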

You should be able to verify whether the apiserver can reach the pod by running kubectl attach -it --namespace=kube-system monitoring-influxdb-grafana-v3-grbs1 and seeing whether it connects. If it can connect, then something is wrong with service routing. If it can't, then the master is having trouble communicating with the node.
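If the attach succeeds but the proxy still times out, it is also worth confirming that the service actually points at the pod (a sketch; the service name comes from your cluster-info output):

    kubectl --namespace=kube-system get endpoints monitoring-grafana
    # The ENDPOINTS column should list 172.17.0.5:3000; if it is empty,
    # the service selector is not matching the pod.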