I am following https://v1-12.docs.kubernetes.io/docs/setup/independent/high-availability/ to set up a highly available cluster.
Three masters: 10.240.0.4 (kb8-master1), 10.240.0.33 (kb8-master2), 10.240.0.75 (kb8-master3); LB: 10.240.0.16 (haproxy)
I set up kb8-master1 as instructed and copied the following files to the remaining masters (kb8-master2 and kb8-master3).
On kb8-master2:
mkdir -p /etc/kubernetes/pki/etcd
mv /home/${USER}/ca.crt /etc/kubernetes/pki/
mv /home/${USER}/ca.key /etc/kubernetes/pki/
mv /home/${USER}/sa.pub /etc/kubernetes/pki/
mv /home/${USER}/sa.key /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf
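(For reference, the copy from kb8-master1 was done along the lines of the scp loop in the linked guide; a sketch, assuming ${USER} is a hypothetical account with SSH access to the other masters:)

USER=ubuntu  # hypothetical login account on the other masters
CONTROL_PLANE_IPS="10.240.0.33 10.240.0.75"
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
    scp /etc/kubernetes/admin.conf "${USER}"@$host:
done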
After that I ran the following commands on kb8-master2.
> `sudo kubeadm alpha phase certs all --config kubeadm-config.yaml`
Output:-
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kb8-master2 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kb8-master2 localhost] and IPs [10.240.0.33 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kb8-master2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.240.0.33]
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
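One way to double-check the SANs reported above is to read them straight off the generated certificates (standard openssl; the paths are where kubeadm writes them):

sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
sudo openssl x509 -in /etc/kubernetes/pki/etcd/peer.crt -noout -text | grep -A1 'Subject Alternative Name'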
>`sudo kubeadm alpha phase kubelet config write-to-disk --config kubeadm-config.yaml`
Output:-
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
>`sudo kubeadm alpha phase kubelet write-env-file --config kubeadm-config.yaml`
Output:-
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
>`sudo kubeadm alpha phase kubeconfig kubelet --config kubeadm-config.yaml`
Output:-
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
>`sudo systemctl start kubelet`
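If the kubelet fails to come up at this point, its state and recent logs can be inspected with the usual systemd tooling:

sudo systemctl status kubelet --no-pager
sudo journalctl -u kubelet -n 50 --no-pager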
>`export KUBECONFIG=/etc/kubernetes/admin.conf`
>`sudo kubectl exec -n kube-system etcd-kb8-master1 -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://10.240.0.4:2379 member add kb8-master2 https://10.240.0.33:2380`
Output:- The connection to the server localhost:8080 was refused - did you specify the right host or port?
Note: I can now run kubectl get po -n kube-system on kb8-master2 and see the pods.
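The localhost:8080 error above is typically not an etcd problem at all: sudo does not inherit the KUBECONFIG variable exported in the calling shell, so `sudo kubectl` falls back to the insecure default of localhost:8080. Passing the kubeconfig explicitly (or simply dropping sudo, since KUBECONFIG is already exported) should get the same command through, e.g.:

sudo kubectl --kubeconfig /etc/kubernetes/admin.conf exec -n kube-system etcd-kb8-master1 -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://10.240.0.4:2379 member add kb8-master2 https://10.240.0.33:2380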
>`sudo kubeadm alpha phase etcd local --config kubeadm-config.yaml`
Output:- (no output)
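This phase only writes the etcd static pod manifest, so silence is expected. Whether the new member actually joined can be checked against the running etcd on kb8-master1, roughly like this (same TLS flags as the member add command; etcdctl here speaks the v2 API):

kubectl --kubeconfig /etc/kubernetes/admin.conf exec -n kube-system etcd-kb8-master1 -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://10.240.0.4:2379 cluster-health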
>`sudo kubeadm alpha phase kubeconfig all --config kubeadm-config.yaml`
Output:-
a kubeconfig file "/etc/kubernetes/admin.conf" exists already but has got the wrong API Server URL
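To see which API server URL the copied admin.conf actually carries (it should be the load-balancer endpoint, https://10.240.0.16:6443, if it was generated on kb8-master1 with controlPlaneEndpoint set), something like this works:

kubectl config view --kubeconfig /etc/kubernetes/admin.conf -o jsonpath='{.clusters[0].cluster.server}'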
I am really stuck here. For reference, below is the kubeadm-config.yaml file I used on kb8-master2:
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
kubernetesVersion: v1.12.2
apiServerCertSANs:
- "10.240.0.16"
controlPlaneEndpoint: "10.240.0.16:6443"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://10.240.0.33:2379"
      advertise-client-urls: "https://10.240.0.33:2379"
      listen-peer-urls: "https://10.240.0.33:2380"
      initial-advertise-peer-urls: "https://10.240.0.33:2380"
      initial-cluster: "kb8-master1=https://10.240.0.4:2380,kb8-master2=https://10.240.0.33:2380"
      initial-cluster-state: existing
    serverCertSANs:
      - kb8-master2
      - 10.240.0.33
    peerCertSANs:
      - kb8-master2
      - 10.240.0.33
networking:
  podSubnet: "10.244.0.0/16"
Has anyone run into the same problem? I am completely stuck here.
Answer 0 (score: 0)
Is there a reason you are performing all of the init and join tasks individually rather than using init and join directly? Kubeadm is meant to be very simple to use.
Create the `initConfiguration` and `clusterConfiguration` manifests and put them in the same file on the master. Then create a `nodeConfiguration` manifest and put it in a file on the node. Then run `kubeadm init --config=/location/master.yml` on the master, and then run `kubeadm join --token <token> 1.2.3.4:6443` on the node.
Rather than stepping through the documentation and how to perform each join subtask individually, you will have a much easier time building the cluster by using kubeadm's automation, following this document step by step.
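A minimal sketch of what such a combined master.yml could look like, assuming the same v1alpha3 API the question uses (addresses are reused from the question purely for illustration; this is not a verbatim config from the answer):

apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
nodeRegistration:
  name: kb8-master1
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: v1.12.2
apiServerCertSANs:
- "10.240.0.16"
controlPlaneEndpoint: "10.240.0.16:6443"
networking:
  podSubnet: "10.244.0.0/16"

Running `kubeadm init --config=master.yml` would then bring up the control plane and print the exact join command, token included.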