Kubernetes dashboard shows "http: proxy error: dial tcp [::1]:8080: connect: connection refused"

Date: 2018-12-03 08:50:46

Tags: kubernetes, dashboard

I used kubeadm to deploy a multi-node Kubernetes cluster and added two nodes. The cluster is up, and I can run my application through a NodePort service. The problem comes when I try to access the dashboard. I followed the steps in this link to install the dashboard:

    kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

dashboard-admin.yaml:

    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: kubernetes-dashboard
      labels:
        k8s-app: kubernetes-dashboard
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: kubernetes-dashboard
      namespace: kube-system

    kubectl create -f dashboard-admin.yaml
    nohup kubectl proxy --address="172.20.22.101" -p 443 --accept-hosts='^*$' &

The proxy runs fine and its output is saved in nohup.out. But when I try to access the site using the URL 172.20.22.101:443/api/v1/namespaces/kube-system/services/…, the connection is refused. Checking the output in nohup.out (and the sockets with "netstat -p"), it shows the following error:

    I1203 12:28:05.880828   15591 log.go:172] http: proxy error: dial tcp [::1]:8080: connect: connection refused

3 Answers:

Answer 0 (score: 0)

You are not running it as root or with sudo privileges.

I faced this issue too; after running it as root, I was able to access it without any errors.

Answer 1 (score: 0)

log.go:172] http: proxy error: dial tcp [::1]:8080: connect: connection refused

If you run into the above issue, it is most likely that you are trying to use the Kubernetes API without the required permissions or configuration.

Note: this is not related to RBAC.

To resolve the issue, I took the following steps:

  1. Check your access permissions; run the command as root.
  2. If you are using kubectl proxy to connect to the Kubernetes API, make sure the kubeconfig file is configured correctly, or try: kubectl proxy --kubeconfig=/path/to/dashboard-user.kubeconfig
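
For context, the `[::1]:8080` address in the error is kubectl's legacy fallback when it cannot find any kubeconfig at all, which is why pointing the proxy at a valid kubeconfig fixes it. A minimal kubeconfig sketch is shown below; the server address, file paths, and all names are illustrative placeholders, not values from this question:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    # Placeholder API server endpoint; use your master node's address and port.
    server: https://<master-ip>:6443
    certificate-authority: /etc/kubernetes/pki/ca.crt
users:
- name: dashboard-user
  user:
    # Placeholder client credentials for the dashboard user.
    client-certificate: /path/to/client.crt
    client-key: /path/to/client.key
contexts:
- name: dashboard-context
  context:
    cluster: my-cluster
    user: dashboard-user
current-context: dashboard-context
```

With a file like this in place, `kubectl proxy --kubeconfig=/path/to/dashboard-user.kubeconfig` dials the real API server instead of falling back to localhost:8080.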

Answer 2 (score: -1)

I ran into a similar problem recently. The root cause was that in my cluster deployment (3 nodes), the kubernetes dashboard container was running on a slave (non-master) node. The problem is that the proxy only serves locally (for security reasons), so the dashboard console could not be opened from the master node while the pod was on node 3!

Browser error on the master node (kubectl proxy executed on this node):

"http: proxy error: dial tcp 10.32.0.2:8001: connect: connection refused"

Error on the slave node (kubectl proxy executed on this node):

 "http: proxy error: dial tcp [::1]:8080: connect: connection refused"

Solution:

The cluster pod listing showed that the dashboard pod kubernetes-dashboard-7b544877d5-lj4xq was running on node 3:

  namespace: kubernetes-dashboard
  pod:       kubernetes-dashboard-7b544877d5-lj4xq
  node:      pb-kn-node03

[root@PB-KN-Node01 ~]# kubectl get pods --all-namespaces -o wide | more
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE     IP             NODE           NOMINATED NODE   READINESS GATES
kube-system            coredns-66bff467f8-ph7cc                     1/1     Running   1          3d17h   10.32.0.3      pb-kn-node01   <none>           <none>
kube-system            coredns-66bff467f8-x22cv                     1/1     Running   1          3d17h   10.32.0.2      pb-kn-node01   <none>           <none>
kube-system            etcd-pb-kn-node01                            1/1     Running   2          3d17h   10.13.40.201   pb-kn-node01   <none>           <none>
kube-system            kube-apiserver-pb-kn-node01                  1/1     Running   2          3d17h   10.13.40.201   pb-kn-node01   <none>           <none>
kube-system            kube-controller-manager-pb-kn-node01         1/1     Running   3          3d17h   10.13.40.201   pb-kn-node01   <none>           <none>
kube-system            kube-proxy-4ngd2                             1/1     Running   2          3d17h   10.13.40.201   pb-kn-node01   <none>           <none>
kube-system            kube-proxy-7qvbj                             1/1     Running   0          3d12h   10.13.40.202   pb-kn-node02   <none>           <none>
kube-system            kube-proxy-fgrcp                             1/1     Running   0          3d12h   10.13.40.203   pb-kn-node03   <none>           <none>
kube-system            kube-scheduler-pb-kn-node01                  1/1     Running   3          3d17h   10.13.40.201   pb-kn-node01   <none>           <none>
kube-system            weave-net-fm2kd                              2/2     Running   5          3d12h   10.13.40.201   pb-kn-node01   <none>           <none>
kube-system            weave-net-l6rmw                              2/2     Running   1          3d12h   10.13.40.203   pb-kn-node03   <none>           <none>
kube-system            weave-net-r56xk                              2/2     Running   1          3d12h   10.13.40.202   pb-kn-node02   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-6b4884c9d5-v2gqp   1/1     Running   0          2d22h   10.40.0.1      pb-kn-node02   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-7b544877d5-lj4xq        1/1     Running   15         2d22h   10.32.0.2      pb-kn-node03   <none>           <none>

So the node was drained, and all active pods (including the dashboard) were reassigned away from node 3:

[root@PB-KN-Node01 ~]# kubectl drain --delete-local-data --ignore-daemonsets pb-kn-node03
node/pb-kn-node03 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-fgrcp, kube-system/weave-net-l6rmw
node/pb-kn-node03 drained

2 minutes later...

[root@PB-KN-Node01 ~]# kubectl get pods --all-namespaces -o wide | more
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE     IP             NODE           NOMINATED NODE   READINESS GATES
kube-system            coredns-66bff467f8-ph7cc                     1/1     Running   1          3d17h   10.32.0.3      pb-kn-node01   <none>           <none>
kube-system            coredns-66bff467f8-x22cv                     1/1     Running   1          3d17h   10.32.0.2      pb-kn-node01   <none>           <none>
kube-system            etcd-pb-kn-node01                            1/1     Running   2          3d17h   10.13.40.201   pb-kn-node01   <none>           <none>
kube-system            kube-apiserver-pb-kn-node01                  1/1     Running   2          3d17h   10.13.40.201   pb-kn-node01   <none>           <none>
kube-system            kube-controller-manager-pb-kn-node01         1/1     Running   3          3d17h   10.13.40.201   pb-kn-node01   <none>           <none>
kube-system            kube-proxy-4ngd2                             1/1     Running   2          3d17h   10.13.40.201   pb-kn-node01   <none>           <none>
kube-system            kube-proxy-7qvbj                             1/1     Running   0          3d12h   10.13.40.202   pb-kn-node02   <none>           <none>
kube-system            kube-proxy-fgrcp                             1/1     Running   0          3d12h   10.13.40.203   pb-kn-node03   <none>           <none>
kube-system            kube-scheduler-pb-kn-node01                  1/1     Running   3          3d17h   10.13.40.201   pb-kn-node01   <none>           <none>
kube-system            weave-net-fm2kd                              2/2     Running   5          3d12h   10.13.40.201   pb-kn-node01   <none>           <none>
kube-system            weave-net-l6rmw                              2/2     Running   1          3d12h   10.13.40.203   pb-kn-node03   <none>           <none>
kube-system            weave-net-r56xk                              2/2     Running   1          3d12h   10.13.40.202   pb-kn-node02   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-6b4884c9d5-v2gqp   1/1     Running   0          2d22h   10.40.0.1      pb-kn-node02   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-7b544877d5-8ln2n        1/1     Running   0          89s     10.32.0.4      pb-kn-node01   <none>           <none>

Problem solved: the kubernetes dashboard is now served from the master node.
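
One follow-up worth noting if you use this drain approach: `kubectl drain` leaves the node cordoned (SchedulingDisabled), so no new pods will be scheduled onto it until you re-enable it. A short sketch, using the node name from this answer:

```shell
# Re-enable scheduling on the drained node; until this runs,
# pb-kn-node03 stays in the SchedulingDisabled state.
kubectl uncordon pb-kn-node03

# Verify the node is Ready and schedulable again.
kubectl get nodes
```

This only re-admits new pods; it does not move the dashboard pod back, so the fix above is not undone.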