kubeadm init: error writing CRISocket information for the control-plane node: timed out waiting for the condition

Date: 2020-07-02 17:00:34

Tags: kubernetes

I have tried many things to get this working. I disabled the proxy settings (removed all of the environment variables) and tried docker, containerd, and CRI-O as the container runtime. I also tried setting serviceSubnet: "11.96.0.0/12" and authorization-mode: "None". Relevant details and logs are below. Any help would be much appreciated.
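For reference, the two settings mentioned above are normally passed to kubeadm through a config file rather than flags. A minimal sketch follows (assumptions: the file name is arbitrary, v1beta2 is the config API version shipped with kubeadm 1.18; also note that "None" is not among kube-apiserver's documented authorization modes — AlwaysAllow is the permissive one):

```shell
# Sketch: write a kubeadm config carrying the serviceSubnet and
# authorization-mode overrides from the question; it would then be
# consumed with: kubeadm init --config kubeadm-config.yaml
cat > kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.5
networking:
  serviceSubnet: "11.96.0.0/12"
apiServer:
  extraArgs:
    # value taken from the question as-is; kube-apiserver documents
    # AlwaysAllow rather than None as the permissive mode
    authorization-mode: "None"
EOF
```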

Environment

ftps_proxy=http://proxy:3128
XDG_SESSION_ID=5
HOSTNAME=my-hostname
SHELL=/bin/bash
TERM=xterm-256color
HISTSIZE=1000
SYSTEMCTL_SKIP_REDIRECT=1
USER=root
http_proxy=http://proxy:3128
LS_COLORS=rs=0:di=38;5;27:ln=38;5;51:mh=44;38;5;15:pi=40;38;5;11:so=38;5;13:do=38;5;5:bd=48;5;232;38;5;11:cd=48;5;232;38;5;3:or=48;5;232;38;5;9:mi=05;48;5;232;38;5;15:su=48;5;196;38;5;15:sg=48;5;11;38;5;16:ca=48;5;196;38;5;226:tw=48;5;10;38;5;16:ow=48;5;10;38;5;21:st=48;5;21;38;5;15:ex=38;5;34:*.tar=38;5;9:*.tgz=38;5;9:*.arc=38;5;9:*.arj=38;5;9:*.taz=38;5;9:*.lha=38;5;9:*.lz4=38;5;9:*.lzh=38;5;9:*.lzma=38;5;9:*.tlz=38;5;9:*.txz=38;5;9:*.tzo=38;5;9:*.t7z=38;5;9:*.zip=38;5;9:*.z=38;5;9:*.Z=38;5;9:*.dz=38;5;9:*.gz=38;5;9:*.lrz=38;5;9:*.lz=38;5;9:*.lzo=38;5;9:*.xz=38;5;9:*.bz2=38;5;9:*.bz=38;5;9:*.tbz=38;5;9:*.tbz2=38;5;9:*.tz=38;5;9:*.deb=38;5;9:*.rpm=38;5;9:*.jar=38;5;9:*.war=38;5;9:*.ear=38;5;9:*.sar=38;5;9:*.rar=38;5;9:*.alz=38;5;9:*.ace=38;5;9:*.zoo=38;5;9:*.cpio=38;5;9:*.7z=38;5;9:*.rz=38;5;9:*.cab=38;5;9:*.jpg=38;5;13:*.jpeg=38;5;13:*.gif=38;5;13:*.bmp=38;5;13:*.pbm=38;5;13:*.pgm=38;5;13:*.ppm=38;5;13:*.tga=38;5;13:*.xbm=38;5;13:*.xpm=38;5;13:*.tif=38;5;13:*.tiff=38;5;13:*.png=38;5;13:*.svg=38;5;13:*.svgz=38;5;13:*.mng=38;5;13:*.pcx=38;5;13:*.mov=38;5;13:*.mpg=38;5;13:*.mpeg=38;5;13:*.m2v=38;5;13:*.mkv=38;5;13:*.webm=38;5;13:*.ogm=38;5;13:*.mp4=38;5;13:*.m4v=38;5;13:*.mp4v=38;5;13:*.vob=38;5;13:*.qt=38;5;13:*.nuv=38;5;13:*.wmv=38;5;13:*.asf=38;5;13:*.rm=38;5;13:*.rmvb=38;5;13:*.flc=38;5;13:*.avi=38;5;13:*.fli=38;5;13:*.flv=38;5;13:*.gl=38;5;13:*.dl=38;5;13:*.xcf=38;5;13:*.xwd=38;5;13:*.yuv=38;5;13:*.cgm=38;5;13:*.emf=38;5;13:*.axv=38;5;13:*.anx=38;5;13:*.ogv=38;5;13:*.ogx=38;5;13:*.aac=38;5;45:*.au=38;5;45:*.flac=38;5;45:*.mid=38;5;45:*.midi=38;5;45:*.mka=38;5;45:*.mp3=38;5;45:*.mpc=38;5;45:*.ogg=38;5;45:*.ra=38;5;45:*.wav=38;5;45:*.axa=38;5;45:*.oga=38;5;45:*.spx=38;5;45:*.xspf=38;5;45:
SUDO_UID=68247485
ftp_proxy=http://proxy:3128
USERNAME=root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/root
LANG=en_US.UTF-8
https_proxy=http://proxy:3128
SHLVL=1
SUDO_COMMAND=/usr/bin/su
HOME=/root
LC_TERMINAL_VERSION=3.3.11
no_proxy=***REDACTED****
LOGNAME=root
LESSOPEN=||/usr/bin/lesspipe.sh %s
SUDO_GID=39999
LC_TERMINAL=iTerm2
_=/usr/bin/env
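Given that http_proxy/https_proxy are set, one thing worth checking is whether no_proxy actually covers the API server address (10.41.11.150 here) and the service CIDR: if it does not, the kubelet's HTTPS calls to 10.41.11.150:6443 can be answered by the corporate proxy, which is one plausible source of the Forbidden responses in the logs further down. A rough sketch of the check (assumption: exact comma-separated entries only; real HTTP clients also do suffix matching and, in some implementations, CIDR matching):

```shell
# Illustrative helper: report whether an address appears verbatim in the
# comma-separated no_proxy list. This deliberately ignores suffix/CIDR
# matching, so it can only prove presence, not absence, of coverage.
in_no_proxy() {
  case ",${no_proxy}," in
    *",$1,"*) return 0 ;;
    *)        return 1 ;;
  esac
}

# Sample value standing in for the redacted no_proxy above.
no_proxy="localhost,127.0.0.1"
in_no_proxy 10.41.11.150 || echo "10.41.11.150 is NOT listed in no_proxy"
```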

  • Kubernetes version (the output shown is from kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:45:16Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:

From cat /proc/cpuinfo:

processor   : 7
vendor_id   : GenuineIntel
cpu family  : 6
model       : 85
model name  : Intel(R) Xeon(R) Platinum 8167M CPU @ 2.00GHz
stepping    : 4
microcode   : 0x1
cpu MHz     : 1995.315
cache size  : 16384 KB
physical id : 0
siblings    : 8
core id     : 3
cpu cores   : 4
apicid      : 7
initial apicid  : 7
fpu     : yes
fpu_exception   : yes
cpuid level : 13
wp      : yes
flags       : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke md_clear
bugs        : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa itlb_multihit
bogomips    : 3990.63
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual

  • OS (e.g. from /etc/os-release):

NAME="Oracle Linux Server"
VERSION="7.8"
ID="ol"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.8"
PRETTY_NAME="Oracle Linux Server 7.8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:7:8:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://bugzilla.oracle.com/"

ORACLE_BUGZILLA_PRODUCT="Oracle Linux 7"
ORACLE_BUGZILLA_PRODUCT_VERSION=7.8
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=7.8
  • Kernel (e.g. uname -a):
Linux ********* REDACTED ********* 4.14.35-2020.el7uek.x86_64 #2 SMP Fri May 15 12:40:03 PDT 2020 x86_64 x86_64 x86_64 GNU/Linux
  • Other: the output of KUBECONFIG=/etc/kubernetes/admin.conf kubectl get po -A is Unable to connect to the server: Forbidden

Output of tail -n 100 /var/log/messages | grep kubelet:


Jul  2 02:31:49 my-host kubelet: E0702 02:31:49.245860   16845 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get node info: node "my-host" not found
Jul  2 02:31:49 my-host kubelet: E0702 02:31:49.268437   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:49 my-host kubelet: E0702 02:31:49.367850   16845 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jul  2 02:31:49 my-host kubelet: E0702 02:31:49.368580   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:49 my-host kubelet: E0702 02:31:49.468741   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:49 my-host kubelet: E0702 02:31:49.568945   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:49 my-host kubelet: E0702 02:31:49.669102   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:49 my-host kubelet: E0702 02:31:49.769265   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:49 my-host kubelet: E0702 02:31:49.869423   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:49 my-host kubelet: E0702 02:31:49.969613   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:50 my-host kubelet: E0702 02:31:50.069779   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:50 my-host kubelet: E0702 02:31:50.169952   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:50 my-host kubelet: E0702 02:31:50.270162   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:50 my-host kubelet: E0702 02:31:50.370314   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:50 my-host kubelet: E0702 02:31:50.470518   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:50 my-host kubelet: E0702 02:31:50.570690   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:50 my-host kubelet: E0702 02:31:50.670844   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:50 my-host kubelet: E0702 02:31:50.771025   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:50 my-host kubelet: E0702 02:31:50.871242   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:50 my-host kubelet: E0702 02:31:50.971404   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:51 my-host kubelet: E0702 02:31:51.071568   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:51 my-host kubelet: E0702 02:31:51.171749   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:51 my-host kubelet: E0702 02:31:51.271907   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:51 my-host kubelet: E0702 02:31:51.372112   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:51 my-host kubelet: E0702 02:31:51.472280   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:51 my-host kubelet: E0702 02:31:51.572449   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:51 my-host kubelet: E0702 02:31:51.672617   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:51 my-host kubelet: E0702 02:31:51.769715   16845 event.go:269] Unable to write event: 'Patch https://10.41.11.150:6443/api/v1/namespaces/default/events/my-host.161de4f886249d98: Forbidden' (may retry after sleeping)
Jul  2 02:31:51 my-host kubelet: E0702 02:31:51.772793   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:51 my-host kubelet: E0702 02:31:51.872998   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:51 my-host kubelet: E0702 02:31:51.911040   16845 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: Get https://10.41.11.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/my-host?timeout=10s: Forbidden
Jul  2 02:31:51 my-host kubelet: E0702 02:31:51.973186   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:52 my-host kubelet: E0702 02:31:52.073314   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:52 my-host kubelet: E0702 02:31:52.173498   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:52 my-host kubelet: E0702 02:31:52.273690   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:52 my-host kubelet: E0702 02:31:52.373853   16845 kubelet.go:2267] node "my-host" not found
Jul  2 02:31:52 my-host kubelet: E0702 02:31:52.474005   16845 kubelet.go:2267] node "my-host" not found
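The repetition above boils down to three distinct failures (the node object missing, CNI not initialized, and Forbidden from the API endpoint). A quick filter to collapse such logs to their distinct messages (a sketch; the pattern list is specific to the lines shown here):

```shell
# Count the distinct kubelet error messages seen on stdin.
dedupe() {
  grep -oE 'node "[^"]+" not found|cni plugin not initialized|Forbidden' |
    sort | uniq -c | sort -rn
}

# Example over two of the lines from above:
printf '%s\n' 'kubelet: node "my-host" not found' 'kubelet: Forbidden' | dedupe
```

In practice you would feed it something like grep kubelet /var/log/messages.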

What happened?

I ran kubeadm init with verbose output, and this is what I got:

kubeadm init --v=5

I0702 02:19:47.181576   16698 initconfiguration.go:103] detected and using CRI socket: /run/containerd/containerd.sock
I0702 02:19:47.181764   16698 interface.go:400] Looking for default routes with IPv4 addresses
I0702 02:19:47.181783   16698 interface.go:405] Default route transits interface "ens3"
I0702 02:19:47.181863   16698 interface.go:208] Interface ens3 is up
I0702 02:19:47.181909   16698 interface.go:256] Interface "ens3" has 1 addresses :[10.41.11.150/28].
I0702 02:19:47.181929   16698 interface.go:223] Checking addr  10.41.11.150/28.
I0702 02:19:47.181939   16698 interface.go:230] IP found 10.41.11.150
I0702 02:19:47.181949   16698 interface.go:262] Found valid IPv4 address 10.41.11.150 for interface "ens3".
I0702 02:19:47.181958   16698 interface.go:411] Found active IP 10.41.11.150
I0702 02:19:47.182015   16698 version.go:183] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.txt
W0702 02:19:47.660545   16698 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.5
[preflight] Running pre-flight checks
I0702 02:19:47.660897   16698 checks.go:577] validating Kubernetes and kubeadm version
I0702 02:19:47.660931   16698 checks.go:166] validating if the firewall is enabled and active
I0702 02:19:47.670323   16698 checks.go:201] validating availability of port 6443
I0702 02:19:47.670487   16698 checks.go:201] validating availability of port 10259
I0702 02:19:47.670518   16698 checks.go:201] validating availability of port 10257
I0702 02:19:47.670552   16698 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0702 02:19:47.670567   16698 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0702 02:19:47.670578   16698 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0702 02:19:47.670587   16698 checks.go:286] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0702 02:19:47.670597   16698 checks.go:432] validating if the connectivity type is via proxy or direct
I0702 02:19:47.670632   16698 checks.go:471] validating http connectivity to first IP address in the CIDR
I0702 02:19:47.670654   16698 checks.go:471] validating http connectivity to first IP address in the CIDR
I0702 02:19:47.670662   16698 checks.go:102] validating the container runtime
I0702 02:19:47.679912   16698 checks.go:376] validating the presence of executable crictl
I0702 02:19:47.679978   16698 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0702 02:19:47.680030   16698 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0702 02:19:47.680065   16698 checks.go:649] validating whether swap is enabled or not
I0702 02:19:47.680141   16698 checks.go:376] validating the presence of executable conntrack
I0702 02:19:47.680166   16698 checks.go:376] validating the presence of executable ip
I0702 02:19:47.680190   16698 checks.go:376] validating the presence of executable iptables
I0702 02:19:47.680216   16698 checks.go:376] validating the presence of executable mount
I0702 02:19:47.680245   16698 checks.go:376] validating the presence of executable nsenter
I0702 02:19:47.680270   16698 checks.go:376] validating the presence of executable ebtables
I0702 02:19:47.680292   16698 checks.go:376] validating the presence of executable ethtool
I0702 02:19:47.680309   16698 checks.go:376] validating the presence of executable socat
I0702 02:19:47.680327   16698 checks.go:376] validating the presence of executable tc
I0702 02:19:47.680343   16698 checks.go:376] validating the presence of executable touch
I0702 02:19:47.680365   16698 checks.go:520] running all checks
I0702 02:19:47.690210   16698 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0702 02:19:47.691000   16698 checks.go:618] validating kubelet version
I0702 02:19:47.754775   16698 checks.go:128] validating if the service is enabled and active
I0702 02:19:47.764254   16698 checks.go:201] validating availability of port 10250
I0702 02:19:47.764336   16698 checks.go:201] validating availability of port 2379
I0702 02:19:47.764386   16698 checks.go:201] validating availability of port 2380
I0702 02:19:47.764435   16698 checks.go:249] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0702 02:19:47.772992   16698 checks.go:838] image exists: k8s.gcr.io/kube-apiserver:v1.18.5
I0702 02:19:47.782489   16698 checks.go:838] image exists: k8s.gcr.io/kube-controller-manager:v1.18.5
I0702 02:19:47.790023   16698 checks.go:838] image exists: k8s.gcr.io/kube-scheduler:v1.18.5
I0702 02:19:47.797925   16698 checks.go:838] image exists: k8s.gcr.io/kube-proxy:v1.18.5
I0702 02:19:47.805928   16698 checks.go:838] image exists: k8s.gcr.io/pause:3.2
I0702 02:19:47.814148   16698 checks.go:838] image exists: k8s.gcr.io/etcd:3.4.3-0
I0702 02:19:47.821926   16698 checks.go:838] image exists: k8s.gcr.io/coredns:1.6.7
I0702 02:19:47.821971   16698 kubelet.go:64] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0702 02:19:47.952580   16698 certs.go:103] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [my-host kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.41.11.150]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0702 02:19:48.880369   16698 certs.go:103] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
I0702 02:19:49.372445   16698 certs.go:103] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [my-host localhost] and IPs [10.41.11.150 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [my-host localhost] and IPs [10.41.11.150 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0702 02:19:50.467723   16698 certs.go:69] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0702 02:19:50.617181   16698 kubeconfig.go:79] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0702 02:19:50.763578   16698 kubeconfig.go:79] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0702 02:19:51.169983   16698 kubeconfig.go:79] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0702 02:19:51.328280   16698 kubeconfig.go:79] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0702 02:19:51.469999   16698 manifests.go:91] [control-plane] getting StaticPodSpecs
I0702 02:19:51.470375   16698 manifests.go:104] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0702 02:19:51.470394   16698 manifests.go:104] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I0702 02:19:51.470400   16698 manifests.go:104] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0702 02:19:51.476683   16698 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0702 02:19:51.476735   16698 manifests.go:91] [control-plane] getting StaticPodSpecs
W0702 02:19:51.476802   16698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0702 02:19:51.477044   16698 manifests.go:104] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0702 02:19:51.477062   16698 manifests.go:104] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I0702 02:19:51.477068   16698 manifests.go:104] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0702 02:19:51.477095   16698 manifests.go:104] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0702 02:19:51.477101   16698 manifests.go:104] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0702 02:19:51.478030   16698 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0702 02:19:51.478061   16698 manifests.go:91] [control-plane] getting StaticPodSpecs
W0702 02:19:51.478146   16698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0702 02:19:51.478368   16698 manifests.go:104] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0702 02:19:51.479022   16698 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0702 02:19:51.479773   16698 local.go:72] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0702 02:19:51.479799   16698 waitcontrolplane.go:87] [wait-control-plane] Waiting for the API server to be healthy
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.502800 seconds
I0702 02:20:05.985260   16698 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0702 02:20:05.998189   16698 uploadconfig.go:122] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
I0702 02:20:06.006321   16698 uploadconfig.go:127] [upload-config] Preserving the CRISocket information for the control-plane node
I0702 02:20:06.006340   16698 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/run/containerd/containerd.sock" to the Node API object "my-host" as an annotation
[kubelet-check] Initial timeout of 40s passed.
timed out waiting for the condition
Error writing Crisocket information for the control-plane node
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runUploadKubeletConfig
    /workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/uploadconfig.go:129
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
    /workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
    /workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
    /workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdInit.func1
    /workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:147
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
    /workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
    /workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
    /workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
    /workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
    _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
    /usr/local/go/src/runtime/proc.go:203
runtime.goexit
    /usr/local/go/src/runtime/asm_amd64.s:1357
error execution phase upload-config/kubelet
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
    /workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
    /workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
    /workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdInit.func1
    /workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:147
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
    /workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
    /workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
    /workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
    /workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
    _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
    /usr/local/go/src/runtime/proc.go:203
runtime.goexit
    /usr/local/go/src/runtime/asm_amd64.s:1357

2 answers:

Answer 0: (score: 0)

It sounds like you may need to clean up the node. The log file indicates that kubeadm cannot communicate with etcd, which may be caused by existing iptables rules or a hostname mismatch. You can try:

sudo swapoff -a 
sudo kubeadm reset
sudo rm -rf /var/lib/cni/
sudo systemctl daemon-reload
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X

Then rerun kubeadm init.
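Before rerunning, a quick sanity check that the reset actually removed the old state can save a retry cycle. A sketch (the directory list is an assumption based on the paths that appear in the logs above):

```shell
# Report any leftover control-plane state directories that are non-empty.
check_state() {
  for d in "$@"; do
    if [ -e "$d" ] && [ -n "$(ls -A "$d" 2>/dev/null)" ]; then
      echo "WARNING: $d is not empty"
    fi
  done
  echo "check complete"
}

check_state /etc/kubernetes/manifests /var/lib/etcd /var/lib/cni
```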

A similar problem is described here.

Answer 1: (score: 0)

I use the following script to completely remove an existing Kubernetes cluster, including the running Docker containers:

sudo kubeadm reset

sudo apt purge kubectl kubeadm kubelet kubernetes-cni -y
sudo apt autoremove
sudo rm -fr /etc/kubernetes/; sudo rm -fr ~/.kube/; sudo rm -fr /var/lib/etcd; sudo rm -rf /var/lib/cni/

sudo systemctl daemon-reload

sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X

# remove all Docker containers created by Kubernetes
docker rm -f $(docker ps -a | grep "k8s_" | awk '{print $1}')