I am running a K8s master (Ubuntu 16.04) and a node (Ubuntu 16.04) on Hyper-V VMs. I cannot join the node to the cluster, and the coredns pods never become ready.
On the K8s worker node:
admin1@POC-k8s-node1:~$ sudo kubeadm join 192.168.137.2:6443 --token s03usq.lrz343lolmrz00lf --discovery-token-ca-cert-hash sha256:5c6b88a78e7b303debda447fa6f7fb48e3746bedc07dc2a518fbc80d48f37ba4 --ignore-preflight-errors=all
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.5. Latest validated version: 18.09
[WARNING Port-10250]: Port 10250 is in use
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher
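(Side note: the cgroup-driver warning above can usually be cleared by switching Docker to the systemd driver, as the linked guide describes. A sketch of the commonly documented fix; it overwrites /etc/docker/daemon.json, so merge with any existing settings:)

# write a daemon.json that tells Docker to use the systemd cgroup driver
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker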
admin1@POC-k8s-node1:~$ journalctl -u kubelet -f
Nov 21 05:28:15 POC-k8s-node1 kubelet[55491]: E1121 05:28:15.784713 55491 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Unauthorized
Nov 21 05:28:15 POC-k8s-node1 kubelet[55491]: E1121 05:28:15.827982 55491 kubelet.go:2267] node "poc-k8s-node1" not found
Nov 21 05:28:15 POC-k8s-node1 kubelet[55491]: E1121 05:28:15.928413 55491 kubelet.go:2267] node "poc-k8s-node1" not found
Nov 21 05:28:15 POC-k8s-node1 kubelet[55491]: E1121 05:28:15.988489 55491 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Unauthorized
Nov 21 05:28:16 POC-k8s-node1 kubelet[55491]: E1121 05:28:16.029295 55491 kubelet.go:2267] node "poc-k8s-node1" not found
Nov 21 05:28:16 POC-k8s-node1 kubelet[55491]: E1121 05:28:16.129571 55491 kubelet.go:2267] node "poc-k8s-node1" not found
Nov 21 05:28:16 POC-k8s-node1 kubelet[55491]: E1121 05:28:16.187178 55491 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Unauthorized
Nov 21 05:28:16 POC-k8s-node1 kubelet[55491]: E1121 05:28:16.230227 55491 kubelet.go:2267] node "poc-k8s-node1" not found
Nov 21 05:28:16 POC-k8s-node1 kubelet[55491]: E1121 05:28:16.330777 55491 kubelet.go:2267] node "poc-k8s-node1" not found
Nov 21 05:28:16 POC-k8s-node1 kubelet[55491]: E1121 05:28:16.386758 55491 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Unauthorized
Nov 21 05:28:16 POC-k8s-node1 kubelet[55491]: E1121 05:28:16.431420 55491 kubelet.go:2267] node "poc-k8s-node1" not found
root@POC-k8s-node1:/home/admin1# journalctl -xe -f
Nov 21 06:30:45 POC-k8s-node1 kubelet[75467]: E1121 06:30:45.670520 75467 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Unauthorized
Nov 21 06:30:45 POC-k8s-node1 kubelet[75467]: E1121 06:30:45.691050 75467 kubelet.go:2267] node "poc-k8s-node1" not found
Nov 21 06:30:45 POC-k8s-node1 kubelet[75467]: E1121 06:30:45.791249 75467 kubelet.go:2267] node "poc-k8s-node1" not found
Nov 21 06:30:45 POC-k8s-node1 kubelet[75467]: E1121 06:30:45.866004
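The repeated "Unauthorized" errors, together with the earlier "Port 10250 is in use" warning, look like a kubelet left over from a previous join attempt, still running with stale credentials. A quick way to check (the paths below are the standard kubeadm locations, so this is a sketch, not verified on this node):

sudo systemctl status kubelet        # is an old kubelet instance already running?
ls -l /etc/kubernetes/kubelet.conf   # credentials written by an earlier join attempt?
ls -l /var/lib/kubelet/pki/          # old kubelet client certificates would live here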
On the K8s master:
root@POC-k8s-master:~# kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.16.3
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.16.3
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.16.3
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.16.3
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.3.15-0
[config/images] Pulled k8s.gcr.io/coredns:1.6.2
root@POC-k8s-master:~# export KUBECONFIG=/etc/kubernetes/admin.conf
root@POC-k8s-master:~# sysctl net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1
root@POC-k8s-master:~# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5644d7b6d9-7xk42 0/1 Pending 0 91s
kube-system coredns-5644d7b6d9-mbrlx 0/1 Pending 0 91s
kube-system etcd-poc-k8s-master 1/1 Running 0 51s
kube-system kube-apiserver-poc-k8s-master 1/1 Running 0 32s
kube-system kube-controller-manager-poc-k8s-master 1/1 Running 0 47s
kube-system kube-proxy-xqb2d 1/1 Running 0 91s
kube-system kube-scheduler-poc-k8s-master 1/1 Running 0 38s
root@POC-k8s-master:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
Answer 0 (score: 2)
It looks like you are on k8s version 1.16, where the DaemonSet API group changed to apps/v1.
Update the link to this one: https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
There is also an issue tracking this: https://github.com/kubernetes/website/issues/16441
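With the updated manifest, which declares its DaemonSets under apps/v1, the apply should go through cleanly:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

The ClusterRole, ClusterRoleBinding, ServiceAccount and ConfigMap from the first attempt already exist, so kubectl will simply report those as unchanged.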
Answer 1 (score: 1)
Solved the first part of the problem with "kubeadm reset" on the node, and then the join command worked! The second part of the question got solved first, which resolved the issue, so thanks a lot to @Alireza David.
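For anyone hitting the same thing, the sequence on the worker node was roughly this (the token and hash are the ones from the question; an expired token can be regenerated on the master with kubeadm token create --print-join-command):

# wipe the half-finished join state, then re-run the join
sudo kubeadm reset
sudo kubeadm join 192.168.137.2:6443 --token s03usq.lrz343lolmrz00lf \
    --discovery-token-ca-cert-hash sha256:5c6b88a78e7b303debda447fa6f7fb48e3746bedc07dc2a518fbc80d48f37ba4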