I cannot run any kubectl commands, and I believe this is because the apiserver-etcd-client certificate has expired.
$ openssl x509 -in /etc/kubernetes/pki/apiserver-etcd-client.crt -noout -text |grep ' Not '
Not Before: Jun 25 17:28:17 2018 GMT
Not After : Jun 25 17:28:18 2019 GMT
The logs from the failing apiserver container show:
Unable to create storage backend: config (&{ /registry [https://127.0.0.1:2379] /etc/kubernetes/pki/apiserver-etcd-client.key /etc/kubernetes/pki/apiserver-etcd-client.crt /etc/kubernetes/pki/etcd/ca.crt true false 1000 0xc420363900 <nil> 5m0s 1m0s}), err (dial tcp 127.0.0.1:2379: getsockopt: connection refused)
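(A quick sanity check before touching any certificates, assuming Docker and iproute2 are available on the master, is to confirm whether anything is still listening on the etcd client port, since that is what the "connection refused" points to:)

# Is anything serving the etcd client port the apiserver dials?
sudo ss -tlnp | grep 2379 || echo "nothing listening on 2379"
# Is the etcd static-pod container running, or exiting shortly after starting?
sudo docker ps -a | grep etcd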
I am running kubeadm 1.10 and want to upgrade to 1.14. I was able to renew several of the expired certificates as described in issue 581 on GitHub. Following those instructions, I updated the following keys and certificates in /etc/kubernetes/pki (the commands involved are sketched after this list):
apiserver
apiserver-kubelet-client
front-proxy-client
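(For reference, the renewal from issue 581 amounts to roughly the following; this is a sketch that reuses the mv/regenerate pattern from the second answer below and the kubeadm 1.10 phase names, so adjust paths and flags to your setup:)

# move the expired pairs aside so kubeadm regenerates them
cd /etc/kubernetes/pki
mv apiserver.crt{,.old}; mv apiserver.key{,.old}
mv apiserver-kubelet-client.crt{,.old}; mv apiserver-kubelet-client.key{,.old}
mv front-proxy-client.crt{,.old}; mv front-proxy-client.key{,.old}
# regenerate them, passing the same config file used later in this question
kubeadm --config kubeadm.yaml alpha phase certs apiserver
kubeadm --config kubeadm.yaml alpha phase certs apiserver-kubelet-client
kubeadm --config kubeadm.yaml alpha phase certs front-proxy-client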
Next, I tried:
kubeadm --config kubeadm.yaml alpha phase certs apiserver-etcd-client
where the kubeadm.yaml file is:
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 172.XX.XX.XXX
kubernetesVersion: v1.10.5
But it returns:
failure loading apiserver-etcd-client certificate: the certificate has expired
In addition, in the /etc/kubernetes/pki/etcd directory every certificate and key except the ca certificate and key has expired.
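(To see the expiry dates in that directory at a glance:)

for c in /etc/kubernetes/pki/etcd/*.crt; do
  echo "$c: $(openssl x509 -in "$c" -noout -enddate)"
done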
Logs from the etcd container:
$ sudo docker logs e4da061fc18f
2019-07-02 20:46:45.705743 I | etcdmain: etcd Version: 3.1.12
2019-07-02 20:46:45.705798 I | etcdmain: Git SHA: 918698add
2019-07-02 20:46:45.705803 I | etcdmain: Go Version: go1.8.7
2019-07-02 20:46:45.705809 I | etcdmain: Go OS/Arch: linux/amd64
2019-07-02 20:46:45.705816 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2019-07-02 20:46:45.705848 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2019-07-02 20:46:45.705871 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, ca = , trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true
2019-07-02 20:46:45.705878 W | embed: The scheme of peer url http://localhost:2380 is HTTP while peer key/cert files are presented. Ignored peer key/cert files.
2019-07-02 20:46:45.705882 W | embed: The scheme of peer url http://localhost:2380 is HTTP while client cert auth (--peer-client-cert-auth) is enabled. Ignored client cert auth for this url.
2019-07-02 20:46:45.712218 I | embed: listening for peers on http://localhost:2380
2019-07-02 20:46:45.712267 I | embed: listening for client requests on 127.0.0.1:2379
2019-07-02 20:46:45.716737 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2019-07-02 20:46:45.718103 I | etcdserver: recovered store from snapshot at index 13621371
2019-07-02 20:46:45.718116 I | etcdserver: name = default
2019-07-02 20:46:45.718121 I | etcdserver: data dir = /var/lib/etcd
2019-07-02 20:46:45.718126 I | etcdserver: member dir = /var/lib/etcd/member
2019-07-02 20:46:45.718130 I | etcdserver: heartbeat = 100ms
2019-07-02 20:46:45.718133 I | etcdserver: election = 1000ms
2019-07-02 20:46:45.718136 I | etcdserver: snapshot count = 10000
2019-07-02 20:46:45.718144 I | etcdserver: advertise client URLs = https://127.0.0.1:2379
2019-07-02 20:46:45.842281 I | etcdserver: restarting member 8e9e05c52164694d in cluster cdf818194e3a8c32 at commit index 13629377
2019-07-02 20:46:45.842917 I | raft: 8e9e05c52164694d became follower at term 1601
2019-07-02 20:46:45.842940 I | raft: newRaft 8e9e05c52164694d [peers: [8e9e05c52164694d], term: 1601, commit: 13629377, applied: 13621371, lastindex: 13629377, lastterm: 1601]
2019-07-02 20:46:45.843071 I | etcdserver/api: enabled capabilities for version 3.1
2019-07-02 20:46:45.843086 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32 from store
2019-07-02 20:46:45.843093 I | etcdserver/membership: set the cluster version to 3.1 from store
2019-07-02 20:46:45.846312 I | mvcc: restore compact to 13274147
2019-07-02 20:46:45.854822 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2019-07-02 20:46:45.855232 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2019-07-02 20:46:45.855267 I | etcdserver: starting server... [version: 3.1.12, cluster version: 3.1]
2019-07-02 20:46:45.855293 I | embed: ClientTLS: cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, ca = , trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true
2019-07-02 20:46:46.443331 I | raft: 8e9e05c52164694d is starting a new election at term 1601
2019-07-02 20:46:46.443388 I | raft: 8e9e05c52164694d became candidate at term 1602
2019-07-02 20:46:46.443405 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 1602
2019-07-02 20:46:46.443419 I | raft: 8e9e05c52164694d became leader at term 1602
2019-07-02 20:46:46.443428 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 1602
2019-07-02 20:46:46.443699 I | etcdserver: published {Name:default ClientURLs:[https://127.0.0.1:2379]} to cluster cdf818194e3a8c32
2019-07-02 20:46:46.443768 I | embed: ready to serve client requests
2019-07-02 20:46:46.444012 I | embed: serving client requests on 127.0.0.1:2379
2019-07-02 20:48:05.528061 N | pkg/osutil: received terminated signal, shutting down...
2019-07-02 20:48:05.528103 I | etcdserver: skipped leadership transfer for single member cluster
Systemd status of the kubelet:
sudo systemctl status -l kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Mon 2019-07-01 14:54:24 UTC; 1 day 23h ago
Docs: http://kubernetes.io/docs/
Main PID: 9422 (kubelet)
Tasks: 13
Memory: 47.0M
CGroup: /system.slice/kubelet.service
└─9422 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authentication-token-webhook=true --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cgroup-driver=cgroupfs --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki
Jul 03 14:10:49 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:49.871276 9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.31.22.241:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dahub-k8s-m1.aws-intanalytic.com&limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:49 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:49.872444 9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://172.31.22.241:6443/api/v1/nodes?fieldSelector=metadata.name%3Dahub-k8s-m1.aws-intanalytic.com&limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:49 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:49.880422 9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://172.31.22.241:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:50.871913 9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.31.22.241:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dahub-k8s-m1.aws-intanalytic.com&limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:50.872948 9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://172.31.22.241:6443/api/v1/nodes?fieldSelector=metadata.name%3Dahub-k8s-m1.aws-intanalytic.com&limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:50.880792 9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://172.31.22.241:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: I0703 14:10:50.964989 9422 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: I0703 14:10:50.966644 9422 kubelet_node_status.go:82] Attempting to register node ahub-k8s-m1.aws-intanalytic.com
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:50.967012 9422 kubelet_node_status.go:106] Unable to register node "ahub-k8s-m1.aws-intanalytic.com" with API server: Post https://172.31.22.241:6443/api/v1/nodes: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Answer 0 (score: 2)
Background: kubectl identifies you (the user) to the API server using a file named config (location on CentOS: /$USER/.kube/config). That file is a copy of admin.conf (location on CentOS: /etc/kubernetes/admin.conf).
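Once admin.conf has been regenerated, the copy that kubectl actually reads has to be refreshed as well; the usual kubeadm snippet for that (run as your regular user) is:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config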
You now need to both upgrade from 1.10 to 1.14 and renew the certificates that have already expired in the existing cluster; just take it step by step so the process does not get complicated!
FYI, you should upgrade one hop at a time, i.e. from 1.10 to 1.11 to 1.12, and so on; check this page for the other versions. It is also important to read the changelogs and carry out the mandatory steps before moving from one hop to the next.
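Each hop then looks roughly like the following on the master (a sketch for CentOS/yum, since this answer refers to CentOS; the patch version is a placeholder to be taken from the release notes):

V=1.11.10
yum install -y kubeadm-${V}-0 --disableexcludes=kubernetes
kubeadm upgrade plan
kubeadm upgrade apply v${V}
yum install -y kubelet-${V}-0 kubectl-${V}-0 --disableexcludes=kubernetes
systemctl daemon-reload && systemctl restart kubelet
# repeat with the next minor version (1.12.x, 1.13.x, 1.14.x), reading its changelog first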
Coming back to renewing the certificates: running only kubeadm --config kubeadm.yaml alpha phase certs apiserver-etcd-client is not enough; run kubeadm --config kubeadm.yaml alpha phase certs all instead.
Please note that after doing the above, since ca.crt has now changed, you also need to run kubeadm --config kubeadm.yaml alpha phase kubeconfig all on all the nodes (Ref).
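On the master this usually means moving the old kubeconfig files aside first, because kubeadm may refuse to overwrite existing (now invalid) ones; a sketch:

mkdir -p /etc/kubernetes/conf-backup
mv /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf /etc/kubernetes/conf-backup/
kubeadm --config kubeadm.yaml alpha phase kubeconfig all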
In the future you may also want to look into cert rotation.
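For what it is worth, the kubelet in the systemctl output above is already running with --rotate-certificates=true and --cert-dir=/var/lib/kubelet/pki, but that only covers the kubelet's own client certificate; the control-plane certificates dealt with here are not rotated by that flag. A quick way to check what the kubelet manages itself:

ps -o args= -C kubelet | tr ' ' '\n' | grep -i rotate
ls -l /var/lib/kubelet/pki/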
Answer 1 (score: 0)
sudo -sE #switch to root
# Check certs on master to see expiration dates
echo -n /etc/kubernetes/pki/{apiserver,apiserver-kubelet-client,apiserver-etcd-client,front-proxy-client,etcd/healthcheck-client,etcd/peer,etcd/server}.crt | xargs -d ' ' -I {} bash -c "ls -hal {} && openssl x509 -in {} -noout -enddate"
# Move existing keys/config files so they can be recreated
mv /etc/kubernetes/pki/apiserver.key{,.old}
mv /etc/kubernetes/pki/apiserver.crt{,.old}
mv /etc/kubernetes/pki/apiserver-kubelet-client.crt{,.old}
mv /etc/kubernetes/pki/apiserver-kubelet-client.key{,.old}
mv /etc/kubernetes/pki/apiserver-etcd-client.crt{,.old}
mv /etc/kubernetes/pki/apiserver-etcd-client.key{,.old}
mv /etc/kubernetes/pki/front-proxy-client.crt{,.old}
mv /etc/kubernetes/pki/front-proxy-client.key{,.old}
mv /etc/kubernetes/pki/etcd/healthcheck-client.crt{,.old}
mv /etc/kubernetes/pki/etcd/healthcheck-client.key{,.old}
mv /etc/kubernetes/pki/etcd/peer.key{,.old}
mv /etc/kubernetes/pki/etcd/peer.crt{,.old}
mv /etc/kubernetes/pki/etcd/server.crt{,.old}
mv /etc/kubernetes/pki/etcd/server.key{,.old}
mv /etc/kubernetes/kubelet.conf{,.old}
mv /etc/kubernetes/admin.conf{,.old}
mv /etc/kubernetes/controller-manager.conf{,.old}
mv /etc/kubernetes/scheduler.conf{,.old}
# Regenerate keys and config files
kubeadm alpha phase certs apiserver --config /etc/kubernetes/kubeadm.yaml
kubeadm alpha phase certs apiserver-etcd-client --config /etc/kubernetes/kubeadm.yaml
kubeadm alpha phase certs apiserver-kubelet-client
kubeadm alpha phase certs front-proxy-client
kubeadm alpha phase certs etcd-healthcheck-client
kubeadm alpha phase certs etcd-peer
kubeadm alpha phase certs etcd-server
kubeadm alpha phase kubeconfig all --config /etc/kubernetes/kubeadm.yaml
# Then restart the kubelet and the services, but for the master it is probably best to just reboot
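If a full reboot is not an option, a common alternative (not part of the original answer, and assuming the standard kubeadm static-pod layout under /etc/kubernetes/manifests) is to restart the kubelet and bounce the control-plane pods by moving their manifests out of the watched directory and back:

systemctl restart kubelet
mkdir -p /etc/kubernetes/manifests.bak
mv /etc/kubernetes/manifests/*.yaml /etc/kubernetes/manifests.bak/
sleep 60   # give the kubelet time to stop the static pods
mv /etc/kubernetes/manifests.bak/*.yaml /etc/kubernetes/manifests/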