kube-proxy error when upgrading k8s with Kubespray

Asked: 2019-05-28 09:19:25

Tags: kubernetes kubeadm kubespray

I deployed a k8s 1.9.5 cluster with Kubespray 2.5.0 and it worked fine, but I need to upgrade it. I went through the next Kubespray releases in sequence: 2.6.0, 2.7.0, 2.8.5. Only the last one fails, at the task kubeadm | Enable kube-proxy, with the following stderr:

error when creating kube-proxy service account: unable to create serviceaccount: Post https://10.2.33.14:6443/api/v1/namespaces/kube-system/serviceaccounts: dial tcp 10.2.33.14:6443: connect: connection refused
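Since the failing call is a plain connection refused against master-01's apiserver, the first thing worth confirming is whether anything is listening on port 6443 there at all. A minimal check, assuming SSH access to master-01 (10.2.33.14 is taken from my inventory below):

# On master-01: is kube-apiserver listening on 6443?
ss -tlnp | grep 6443
# From the Ansible host: probe the apiserver health endpoint
# (-k because of the self-signed cert; even a 401/403 would prove something is listening).
curl -k https://10.2.33.14:6443/healthz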

I tried resetting the cluster with Kubespray 2.7.0 using the dedicated playbook, which went fine, but the same error came back as soon as I started the upgrade again.
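For reference, both the upgrades and the reset were run with Kubespray's standard playbooks, roughly like this (the inventory path is an assumption):

# Upgrade one Kubespray release at a time, each run from its own checkout.
ansible-playbook -i inventory/hosts.ini -b upgrade-cluster.yml    # 2.6.0, then 2.7.0, then 2.8.5
# Reset with the dedicated playbook, run from the 2.7.0 checkout.
ansible-playbook -i inventory/hosts.ini -b reset.yml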

I also checked the docker containers on that master node: the kube-proxy container had exited. I uploaded its log to https://termbin.com/klk5, where you can see the following:

1 proxier.go:540] Error removing iptables rules in ipvs proxier: error deleting chain \"KUBE-MARK-MASQ\": exit status 1: iptables: Too many links.\n","stream":"stderr","time":"2019-05-27T15:05:05.802972706Z"}
[...]
1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:129: Failed to list *core.Service: Get https://127.0.0.1:6443/api/v1/services?limit=500\u0026resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\n","stream":"stderr","time":"2019-05-27T15:05:05.915223763Z"}
1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:129: Failed to list *core.Endpoints: Get https://127.0.0.1:6443/api/v1/endpoints?limit=500\u0026resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\n","stream":"stderr","time":"2019-05-27T15:05:05.915232458Z"}
1 event.go:212] Unable to write event: 'Post https://127.0.0.1:6443/api/v1/namespaces/default/events: dial tcp 127.0.0.1:6443: connect: connection refused' (may retry after sleeping)\n","stream":"stderr","time":"2019-05-27T15:05:05.915357974Z"}
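For anyone trying to reproduce these checks: the exited container and its full log came from standard docker commands, and the "Too many links" error in the first line typically means iptables refused to delete the KUBE-MARK-MASQ chain because other rules still jump to it. A sketch (the container ID is a placeholder):

# Find the exited kube-proxy container on the master.
docker ps -a | grep kube-proxy
# Dump its log (substitute the real container ID).
docker logs <container-id>
# List NAT rules still referencing the chain kube-proxy could not delete.
iptables-save -t nat | grep KUBE-MARK-MASQ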

Here are some of the options I have set in group_vars:

cloud_provider: vsphere
kube_network_plugin: flannel
kube_proxy_mode: iptables
dns_mode: kubedns
resolvconf_mode: docker_dns
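To double-check that these settings actually made it onto the nodes, the effective kube-proxy mode can be read off the command line the container was started with; something like this (again, the container ID is a placeholder):

# Print the command line of the kube-proxy container to see the effective proxy mode.
docker inspect --format '{{.Path}} {{.Args}}' <kube-proxy-container-id>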

My hosts.ini file:

master-01 ansible_ssh_host=10.2.33.14
master-02 ansible_ssh_host=10.2.33.15
master-03 ansible_ssh_host=10.2.33.3
node-01 ansible_ssh_host=10.2.33.16
node-02 ansible_ssh_host=10.2.33.17
node-03 ansible_ssh_host=10.2.33.4
node-04 ansible_ssh_host=10.2.33.6
node-05 ansible_ssh_host=10.2.33.21
node-06 ansible_ssh_host=10.2.33.22
node-07 ansible_ssh_host=10.2.33.23
node-08 ansible_ssh_host=10.2.33.24
node-09 ansible_ssh_host=10.2.33.25
node-10 ansible_ssh_host=10.2.33.5
node-cassandra-01 ansible_ssh_host=10.2.33.18
node-cassandra-02 ansible_ssh_host=10.2.33.19
node-cassandra-03 ansible_ssh_host=10.2.33.20
[kube-master]
master-01
master-02
master-03
[etcd]
master-01
master-02
master-03
[kube-node]
node-01
node-02
node-03
node-04
node-05
node-06
node-07
node-08
node-09
node-10
node-cassandra-01
node-cassandra-02
node-cassandra-03
[k8s-cluster:children]
kube-node
kube-master
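And a basic sanity check that Ansible can still reach every host in this inventory (standard Ansible; the inventory path is an assumption):

# Ping all hosts in the inventory over SSH.
ansible -i inventory/hosts.ini all -m ping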

I expected Kubespray to be able to upgrade a k8s cluster it deployed itself and to handle the unchanged configuration.

I'm looking for help to fix this issue; I have also posted it in the kubespray channel of the Kubernetes Slack.

Thanks for reading.

0 Answers:

There are no answers yet.