I completely tore down my v1.13.1 cluster and am now running v1.15.0 with Calico CNI v3.8.0. All the pods are running:
[gms@thalia0 ~]$ kubectl get po --namespace=kube-system
NAME                                          READY   STATUS    RESTARTS   AGE
calico-kube-controllers-59f54d6bbc-2mjxt      1/1     Running   0          7m23s
calico-node-57lwg                             1/1     Running   0          7m23s
coredns-5c98db65d4-qjzpq                      1/1     Running   0          8m46s
coredns-5c98db65d4-xx2sh                      1/1     Running   0          8m46s
etcd-thalia0.ahc.umn.edu                      1/1     Running   0          8m5s
kube-apiserver-thalia0.ahc.umn.edu            1/1     Running   0          7m46s
kube-controller-manager-thalia0.ahc.umn.edu   1/1     Running   0          8m2s
kube-proxy-lg4cn                              1/1     Running   0          8m46s
kube-scheduler-thalia0.ahc.umn.edu            1/1     Running   0          7m40s
However, when I look at the endpoints, I get the following:
[gms@thalia0 ~]$ kubectl get ep --namespace=kube-system
NAME                      ENDPOINTS                                                            AGE
kube-controller-manager   <none>                                                               9m46s
kube-dns                  192.168.16.194:53,192.168.16.195:53,192.168.16.194:53 + 3 more...   9m30s
kube-scheduler            <none>                                                               9m46s
If I look at the logs for the apiserver, I see a ton of TLS handshake errors like these:
I0718 19:35:17.148852 1 log.go:172] http: TLS handshake error from 10.x.x.160:45042: remote error: tls: bad certificate
I0718 19:35:17.158375 1 log.go:172] http: TLS handshake error from 10.x.x.159:53506: remote error: tls: bad certificate
These IPs are from nodes in the previous cluster. I have since deleted them and ran kubeadm reset on all nodes, including the master, so I have no idea why they keep showing up. I assume this is also why the endpoints for controller-manager and scheduler show up as <none>.
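
For reference, one way to check whether a machine at one of those IPs is still presenting credentials from the old cluster is to compare certificate issuers (a minimal sketch, assuming the default kubeadm and kubelet certificate paths):

# On the master: fingerprint of the CA the new cluster was initialized with
$ sudo openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -fingerprint -sha256
# On a suspect node: issuer and expiry of the client certificate the kubelet presents
$ sudo openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -issuer -enddate

If a node's certificate was issued by a CA whose fingerprint doesn't match the new cluster's CA, that node is still dialing in with state from the old cluster.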
Answer 0 (score: 0)
In order to completely wipe your cluster, you should do the following:
1) Reset the cluster:

$ sudo kubeadm reset (or the command appropriate to your cluster)

2) Wipe the local directory holding your kubeconfig:

$ rm -rf ~/.kube/

3) Remove /etc/kubernetes/:

$ sudo rm -rf /etc/kubernetes/

4) One of the key points is to get rid of the previous etcd state:

$ sudo rm -rf /var/lib/etcd/
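
Also worth noting: kubeadm reset itself warns that it does not clean up CNI configuration or iptables/IPVS rules, so stale Calico state can survive a rebuild. A sketch of the extra per-node cleanup (paths assume the default CNI config directory):

# Remove leftover CNI configuration from the previous Calico install
$ sudo rm -rf /etc/cni/net.d
# Flush iptables rules left behind by kube-proxy/Calico (run with care)
$ sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X

After that, re-run kubeadm init and rejoin the nodes with freshly generated certificates.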