I can't upgrade my Kubernetes cluster from 1.16.7 (installed with kubespray 2.12.3) to Kubernetes 1.17.7 (with kubespray 2.13.2). The upgrade starts and the first master is upgraded successfully, but the playbook always fails to complete the task: TASK [kubernetes/master : kubeadm | Upgrade other masters].
The cluster is installed on Red Hat 7.3 VMs, and I used Ansible 2.9 with kubespray 2.12.3.
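For reference, this is roughly how I invoke the upgrade from the kubespray 2.13.2 checkout (the inventory path below is specific to my setup, so treat it as an example):

```sh
# Standard kubespray graceful-upgrade invocation;
# "inventory/mycluster/hosts.yml" is my inventory path, adjust to yours.
ansible-playbook upgrade-cluster.yml \
  -b -i inventory/mycluster/hosts.yml \
  -e kube_version=v1.17.7
```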
I get the error below, but from what I've read the clusterDNS warnings are not significant (kubespray points the kubelets at the nodelocaldns address, 169.254.25.10, rather than the in-cluster DNS service IP).
Can you help me?
```
fatal: [gate7430]: FAILED! => {
    "changed": true,
    "cmd": ["timeout", "-k", "600s", "600s", "/usr/local/bin/kubeadm", "upgrade", "apply",
            "-y", "v1.17.7", "--config=/etc/kubernetes/kubeadm-config.yaml",
            "--ignore-preflight-errors=all", "--allow-experimental-upgrades",
            "--allow-release-candidate-upgrades", "--etcd-upgrade=false",
            "--certificate-renewal=true", "--force"],
    "delta": "0:10:00.006944",
    "start": "2020-06-30 15:05:40.329172",
    "end": "2020-06-30 15:15:40.336116",
    "failed_when_result": true,
    "msg": "non-zero return code",
    "rc": 124
}
```

stderr:

```
W0630 15:05:40.390899 15187 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W0630 15:05:40.391113 15187 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0630 15:05:40.391130 15187 validation.go:28] Cannot validate kubelet config - no validator is available
W0630 15:05:40.407567 15187 common.go:94] WARNING: Usage of the --config flag for reconfiguring the cluster during upgrade is not recommended!
W0630 15:05:40.411008 15187 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W0630 15:05:40.411107 15187 validation.go:28] Cannot validate kubelet config - no validator is available
W0630 15:05:40.411121 15187 validation.go:28] Cannot validate kube-proxy config - no validator is available
    [WARNING ControlPlaneNodesReady]: there are NotReady control-planes in the cluster: [gate7430]
W0630 15:10:42.214839 15187 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
```

stdout:

```
[upgrade/config] Making sure the configuration is correct:
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.17.7"
[upgrade/versions] Cluster version: v1.16.7
[upgrade/versions] kubeadm version: v1.17.7
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler]
[upgrade/prepull] Prepulling image for component kube-scheduler.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.17.7"...
Static pod: kube-apiserver-gate7430 hash: e1d41b7728deb2dfb1f69a175656a9d2
Static pod: kube-controller-manager-gate7430 hash: a8c2b827cdee2fb243c7e193d7a64e46
Static pod: kube-scheduler-gate7430 hash: a0b6f2dca2fdda5eb0b21c9279067c48
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests994032600"
[controlplane] Adding extra host path mount "etc-pki-tls" to "kube-apiserver"
[controlplane] Adding extra host path mount "etc-pki-ca-trust" to "kube-apiserver"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-06-30-15-10-41/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-gate7430 hash: e1d41b7728deb2dfb1f69a175656a9d2
Static pod: kube-apiserver-gate7430 hash: e1d41b7728deb2dfb1f69a175656a9d2
Static pod: kube-apiserver-gate7430 hash: e1d41b7728deb2dfb1f69a175656a9d2
Static pod: kube-apiserver-gate7430 hash: 29b84d85a8d16a0775b9db4d98c8c013
[apiclient] Found 2 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-06-30-15-10-41/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-gate7430 hash: a8c2b827cdee2fb243c7e193d7a64e46
Static pod: kube-controller-manager-gate7430 hash: a8c2b827cdee2fb243c7e193d7a64e46
Static pod: kube-controller-manager-gate7430 hash: ce77b3c8652cec66a90e424f05663b1e
[apiclient] Found 2 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-06-30-15-10-41/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-gate7430 hash: a0b6f2dca2fdda5eb0b21c9279067c48
Static pod: kube-scheduler-gate7430 hash: a0b6f2dca2fdda5eb0b21c9279067c48
Static pod: kube-scheduler-gate7430 hash: 07fbe6c92209460e21d2f4ac86b256b2
[apiclient] Found 2 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons]: Migrating CoreDNS Corefile
[addons] Applied essential addon: CoreDNS
```
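For what it's worth, rc 124 is what GNU `timeout` returns when it kills the wrapped command, and the delta is exactly 0:10:00, so `kubeadm upgrade apply` was still running after the 600s limit; combined with the `[WARNING ControlPlaneNodesReady]` line in stderr, it looks like gate7430 never came back Ready. These are the standard (not kubespray-specific) commands I can run on the master to gather more detail, if any of that output would help:

```sh
# Check whether the control-plane node is stuck NotReady
kubectl get nodes -o wide

# Inspect the kubelet on the affected master; a NotReady node
# usually logs the reason here
journalctl -u kubelet --since "2020-06-30 15:05" --no-pager | tail -n 100

# Verify the upgraded static pods actually came up
kubectl -n kube-system get pods -o wide | grep gate7430
```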