*4 connect() failed (113: No route to host) while connecting to upstream for the Kubernetes dashboard

Date: 2018-08-27 20:44:23

Tags: kubernetes

My dashboard is not showing up. When I hit the URL, I can see the request in the nginx logs, but in the browser I get 502 Bad Gateway nginx/1.13.8.

kubectl logs --follow -n kube-system deployment/nginx-ingress

2018/08/27 21:14:40 [error] 51#51: *4 connect() failed (113: No route to host) while connecting to upstream, client: 10.125.16.80, server: osmsku---kubemaster01, request: "GET /kube-ui/ HTTP/1.1", upstream: "http://172.17.77.5:9090/", host: "osmsku---kubemaster01"

10.125.16.80 - - [27/Aug/2018:21:14:40 +0000] "GET /kube-ui/ HTTP/1.1" 502 575 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36" "-"
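The 502 matches the nginx error: the ingress resolved the dashboard upstream to 172.17.77.5:9090 and then could not route to that address. A first diagnostic (a sketch to run against the live cluster; the label and service name are taken from the listings further down) is to check whether 172.17.77.5 is really the dashboard pod's IP, and whether it sits inside the flannel pod network:

```shell
# Does the Service endpoint match the pod IP, and is that IP one that
# other nodes can actually route to (i.e. a flannel-allocated address)?
kubectl -n kube-system get pods -o wide -l k8s-app=kubernetes-dashboard
kubectl -n kube-system get endpoints kubernetes-dashboard
```

If the pod IP shown here is a 172.17.x.x address, the pod is on the local docker bridge rather than the overlay network, which would produce exactly this "No route to host" from the ingress pod.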

In the dashboard logs I see:

2018/08/27 21:13:35 Creating in-cluster Heapster client

2018/08/27 21:13:35 Serving insecurely on HTTP port: 9090 
2018/08/27 21:13:38 Metric client health check failed: the server is currently unable to handle the request (get services heapster). Retrying in 30 seconds.  
2018/08/27 21:14:11 Metric client health check failed: the server is currently unable to handle the request (get services heapster). Retrying in 30 seconds.   
2018/08/27 21:14:44 Metric client health check failed: the server is currently unable to handle the request (get services heapster). Retrying in 30 seconds.
2018/08/27 21:15:17 Metric client health check failed: the server is currently unable to handle the request (get services heapster). Retrying in 30 seconds.
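The heapster health-check failures look like another symptom of the same reachability problem: the dashboard cannot get a response for the heapster service. One way to confirm (again a diagnostic sketch against the live cluster) is to check whether the heapster Service has any reachable endpoints at all:

```shell
# An empty ENDPOINTS column means the Service selector matches no
# ready pod, which would explain the repeated health-check retries.
kubectl -n kube-system get endpoints heapster
kubectl -n kube-system describe svc heapster
```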

I tried deleting all the pods in the kube-system namespace, and also deleted the dashboard/heapster deployments, but nothing helped. Any idea what is going on, or what to check? Note: I had upgraded the cluster, and everything worked fine afterwards. It was after I rebooted the master node post-upgrade that this started happening.

NAME                                           STATUS    ROLES     AGE       VERSION
osmsku---kubemaster01..local   Ready     master    140d      v1.11.2
osmsku---kubemaster02..local   Ready     <none>    140d      v1.11.2
osmsku---kubenode01..local     Ready     <none>    140d      v1.11.2
osmsku---kubenode02..local     Ready     <none>    140d      v1.11.2

Updated as per the comments below: 172.17.77.5 is the docker interface IP
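Since 172.17.77.5 is on the docker0 bridge rather than in the flannel pod network, a plausible post-reboot failure mode is that docker started before it was reconfigured with the flannel-allocated subnet, so pods received bridge-local 172.17.x.x addresses that other nodes cannot route to. A minimal sketch of the check, using hypothetical values in place of what `/run/flannel/subnet.env` and `ip -4 addr show docker0` would report on a real node:

```shell
#!/bin/sh
# Hypothetical values for illustration only -- on a real node,
# FLANNEL_SUBNET comes from /run/flannel/subnet.env and DOCKER_BIP
# from `ip -4 addr show docker0`.
FLANNEL_SUBNET="10.244.1.1/24"   # subnet flannel allocated to this node (assumed)
DOCKER_BIP="172.17.77.1/16"      # docker0 bridge address (matches the 172.17.77.x upstream)

if [ "$FLANNEL_SUBNET" != "$DOCKER_BIP" ]; then
  # Pods attached to docker0 get 172.17.x.x addresses that other nodes
  # cannot reach -- exactly the "No route to host" symptom above.
  echo "mismatch: docker0 ($DOCKER_BIP) is not using the flannel subnet ($FLANNEL_SUBNET)"
fi
```

If the two differ, restarting docker after flannel is up (or setting docker's `--bip` to the flannel subnet) and then recreating the affected pods is a common remedy.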

kubectl -n kube-system get -o wide svc
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE       SELECTOR
elasticsearch-logging   ClusterIP   10.110.162.147   <none>        9200/TCP         4h        k8s-app=elasticsearch-logging
heapster                ClusterIP   10.98.52.12      <none>        80/TCP           1h        k8s-app=heapster
kibana-logging          NodePort    10.99.101.8      <none>        5601:30275/TCP   4h        k8s-app=kibana-logging
kube-dns                ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP    3h        k8s-app=kube-dns
kubernetes-dashboard    NodePort    10.99.131.186    <none>        80:32264/TCP     2h        k8s-app=kubernetes-dashboard
monitoring-influxdb     ClusterIP   10.101.205.79    <none>        8086/TCP         1h        k8s-app=influxdb

kubectl get -o wide node
NAME                                           STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
osmsku--prod-kubemaster01..local   Ready     master    140d      v1.11.2   <none>        CentOS Linux 7 (Core)   3.10.0-514.26.2.el7.x86_64   docker://18.3.0
osmsku--prod-kubemaster02..local   Ready     <none>    140d      v1.11.2   <none>        CentOS Linux 7 (Core)   3.10.0-514.26.2.el7.x86_64   docker://18.3.0
osmsku--prod-kubenode01..local     Ready     <none>    140d      v1.11.2   <none>        CentOS Linux 7 (Core)   3.10.0-514.26.2.el7.x86_64   docker://18.3.0
osmsku--prod-kubenode02..local     Ready     <none>    140d      v1.11.2   <none>        CentOS Linux 7 (Core)   3.10.0-514.26.2.el7.x86_64   docker://18.3.0


   kubectl get pods -n kube-system
NAME                                                                   READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-qngz5                                               1/1       Running   2          2h
coredns-78fcdf6894-xdjcg                                               1/1       Running   1          2h
elasticsearch-logging-0                                                1/1       Running   2          2h
etcd-osmsku--prod-kubemaster01..local                      1/1       Running   15         2h
fluentd-es-v2.0.3-g77rm                                                1/1       Running   2          2h
fluentd-es-v2.0.3-x5bds                                                1/1       Running   3          2h
heapster-6d956577dc-d6l6k                                              1/1       Running   0          1h
kibana-logging-66fcf97dc8-57nd5                                        1/1       Running   1          2h
kube-apiserver-osmsku--prod-kubemaster01..local            1/1       Running   2          2h
kube-controller-manager-osmsku--prod-kubemaster01..local   1/1       Running   2          2h
kube-flannel-ds-4wdb7                                                  1/1       Running   3          2h
kube-flannel-ds-5g26z                                                  1/1       Running   2          2h
kube-flannel-ds-c9zss                                                  1/1       Running   3          2h
kube-flannel-ds-jbsfm                                                  1/1       Running   3          2h
kube-proxy-dzllb                                                       1/1       Running   1          2h
kube-proxy-gv2lf                                                       1/1       Running   2          2h
kube-proxy-gxd6b                                                       1/1       Running   2          2h
kube-proxy-hfwrv                                                       1/1       Running   2          2h
kube-scheduler-osmsku--prod-kubemaster01..local            1/1       Running   2          2h
kubernetes-dashboard-6bc9c6f7cb-f8g7s                                  1/1       Running   0          2h
monitoring-influxdb-cf9d95766-tkqhp                                    1/1       Running   0          1h
nginx-ingress-5659cc597-g9qg6                                          1/1       Running   0          2h

0 Answers:

There are no answers yet.