FailedCreatePodSandBox and kubelet, $(slave name): Pod sandbox changed, it will be killed and re-created

Date: 2019-04-14 11:28:57

Tags: kubernetes

I am running a Kubernetes cluster with 6 nodes (cluster-master and kubernetes-slave0-4) on Ubuntu "Bionic Beaver", using Weave as the network plugin. To install Kubernetes I followed https://vitux.com/install-and-deploy-kubernetes-on-ubuntu/, and I installed Weave after cleanly removing the networking add-on recommended there (it no longer shows up in my pods).

Running kubectl get pods --all-namespaces returns:

NAMESPACE     NAME                                     READY   STATUS              RESTARTS   AGE
kube-system   coredns-fb8b8dccf-g8psp                  0/1     ContainerCreating   0          77m
kube-system   coredns-fb8b8dccf-pbfr7                  0/1     ContainerCreating   0          77m
kube-system   etcd-cluster-master                      1/1     Running             5          77m
kube-system   kube-apiserver-cluster-master            1/1     Running             5          77m
kube-system   kube-controller-manager-cluster-master   1/1     Running             5          77m
kube-system   kube-proxy-72s98                         1/1     Running             5          73m
kube-system   kube-proxy-cqmdm                         1/1     Running             3          63m
kube-system   kube-proxy-hgnpj                         1/1     Running             0          69m
kube-system   kube-proxy-nhjdc                         1/1     Running             5          72m
kube-system   kube-proxy-sqvdd                         1/1     Running             5          77m
kube-system   kube-proxy-vmg9k                         1/1     Running             0          70m
kube-system   kube-scheduler-cluster-master            1/1     Running             5          76m
kube-system   kubernetes-dashboard-5f7b999d65-p7clv    0/1     ContainerCreating   0          61m
kube-system   weave-net-2brvt                          2/2     Running             0          69m
kube-system   weave-net-5wlks                          2/2     Running             16         72m
kube-system   weave-net-65qmd                          2/2     Running             16         73m
kube-system   weave-net-9x8cz                          2/2     Running             9          63m
kube-system   weave-net-r2nhz                          2/2     Running             15         75m
kube-system   weave-net-stq8x                          2/2     Running             0          70m

If I run kubectl describe $(kube dashboard pod name) --namespace=kube-system, it returns:

NAME                                    READY   STATUS              RESTARTS   AGE
kubernetes-dashboard-5f7b999d65-p7clv   0/1     ContainerCreating   0          64m
rock64@cluster-master:~$
rock64@cluster-master:~$ kubectl describe pods kubernetes-dashboard-5f7b999d65-p7clv --namespace=kube-system
Name:               kubernetes-dashboard-5f7b999d65-p7clv
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               kubernetes-slave1/10.0.0.215
Start Time:         Sun, 14 Apr 2019 10:20:42 +0000
Labels:             k8s-app=kubernetes-dashboard
                    pod-template-hash=5f7b999d65
Annotations:        <none>
Status:             Pending
IP:
Controlled By:      ReplicaSet/kubernetes-dashboard-5f7b999d65
Containers:
  kubernetes-dashboard:
    Container ID:
    Image:         k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
    Image ID:
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --auto-generate-certificates
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Liveness:       http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-znrkr (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-certs
    Optional:    false
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kubernetes-dashboard-token-znrkr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-znrkr
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                     From                        Message
  ----     ------                  ----                    ----                        -------
  Normal   Scheduled               64m                     default-scheduler           Successfully assigned kube-system/kubernetes-dashboard-5f7b999d65-p7clv to kubernetes-slave1
  Warning  FailedCreatePodSandBox  64m                     kubelet, kubernetes-slave1  Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "4e6d9873f49a02e86cef79e338ce97162291897b2aaad1ddb99c5e066ed42178" network for pod "kubernetes-dashboard-5f7b999d65-p7clv": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-5f7b999d65-p7clv_kube-system" network: failed to find plugin "loopback" in path [/opt/cni/bin], failed to clean up sandbox container "4e6d9873f49a02e86cef79e338ce97162291897b2aaad1ddb99c5e066ed42178" network for pod "kubernetes-dashboard-5f7b999d65-p7clv": NetworkPlugin cni failed to teardown pod "kubernetes-dashboard-5f7b999d65-p7clv_kube-system" network: failed to find plugin "portmap" in path [/opt/cni/bin]]
  Normal   SandboxChanged          59m (x25 over 64m)      kubelet, kubernetes-slave1  Pod sandbox changed, it will be killed and re-created.
  Normal   SandboxChanged          49m (x18 over 53m)      kubelet, kubernetes-slave1  Pod sandbox changed, it will be killed and re-created.
  Normal   SandboxChanged          46m (x13 over 48m)      kubelet, kubernetes-slave1  Pod sandbox changed, it will be killed and re-created.
  Normal   SandboxChanged          24m (x94 over 44m)      kubelet, kubernetes-slave1  Pod sandbox changed, it will be killed and re-created.
  Normal   SandboxChanged          4m12s (x26 over 9m52s)  kubelet, kubernetes-slave1  Pod sandbox changed, it will be killed and re-created.

2 answers:

Answer 0 (score: 1)

failed to find plugin "loopback" in path [/opt/cni/bin]

As that helpful message is trying to tell you, it seems you have a botched CNI install. Whenever you see FailedCreatePodSandBox or SandboxChanged events, they are always(?) related to a CNI failure.
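A quick way to confirm the diagnosis is to look at what is actually present in the CNI plugin directory on the node the pod was scheduled to (the error above names loopback and portmap as missing):

```shell
# On the affected node (kubernetes-slave1 here): list installed CNI plugins.
# A working install typically contains bridge, loopback, portmap, host-local, etc.
ls -l /opt/cni/bin
```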

The very short version is: grab the latest CNI plugins release, unpack it into /opt/cni/bin, make sure the binaries are executable, and restart... well, probably the machine, but certainly the offending pod, and most likely kubelet as well.
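A sketch of those steps on the affected node; the version and architecture below are assumptions, so pick the build that matches your nodes from the CNI plugins release page (the cluster in the question runs on rock64 boards, suggesting arm64):

```shell
# Run as root on the node. CNI_VERSION and ARCH are example values.
CNI_VERSION="v0.8.7"
ARCH="arm64"
mkdir -p /opt/cni/bin
curl -fsSL \
  "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-${ARCH}-${CNI_VERSION}.tgz" \
  | tar -xz -C /opt/cni/bin
chmod +x /opt/cni/bin/*     # make sure the plugins are executable
systemctl restart kubelet   # then delete the stuck pod so it is re-created
kubectl -n kube-system delete pod kubernetes-dashboard-5f7b999d65-p7clv
```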

P.S. Conducting a little searching will serve you well on SO, as this is a very common question.

Answer 1 (score: 0)

FWIW, I hit a similar issue on Fedora 34. The cause was that the CNI plugins were unpacked into /usr/libexec/cni, but "containerd config default" generated a config file with bin_dir = "/opt/cni/bin". Setting bin_dir = "/usr/libexec/cni" in the config file and restarting containerd fixed it.
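The fix described above can be sketched as follows (paths are the Fedora ones from this answer; adjust bin_dir to wherever your distro actually installed the plugins):

```shell
# Point containerd at the directory that really contains the CNI plugins,
# then restart containerd so the new bin_dir takes effect.
sudo sed -i 's|bin_dir = "/opt/cni/bin"|bin_dir = "/usr/libexec/cni"|' \
  /etc/containerd/config.toml
sudo systemctl restart containerd
```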

For Fedora, see also https://bugzilla.redhat.com/show_bug.cgi?id=1731597