Kubernetes pods are not evicted from a dead node

Asked: 2020-03-23 03:46:38

Tags: kubernetes google-kubernetes-engine kubernetes-pod kubernetes-deployment

I have a kube cluster set up with kubeadm init (mostly defaults). Everything works as expected, except that if one of my nodes goes offline while pods are running on it, those pods stay in the Running state indefinitely. From what I have read, they should move to an Unknown or Failed state, and after --pod-eviction-timeout (default 5m) they should be rescheduled onto another healthy node.
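For reference, that timeout is a kube-controller-manager flag; on a kubeadm cluster it would be adjusted in the static pod manifest (a sketch below, assuming the default kubeadm manifest path; I have left the flag at its default):

# /etc/kubernetes/manifests/kube-controller-manager.yaml (default kubeadm path)
spec:
  containers:
  - command:
    - kube-controller-manager
    - --pod-eviction-timeout=5m0s   # how long to wait before evicting pods from a NotReady node
    # ...remaining flags unchanged...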

Here are my pods more than 20 minutes after node7 went offline (I have also left them for over two days without any rescheduling):

kubectl get pods -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
workshop-30000-77b95f456c-sxkp5        1/1     Running   0          20m   REDACTED       node7   <none>           <none>
workshop-operator-657b45b6b8-hrcxr     2/2     Running   0          23m   REDACTED       node7   <none>           <none>

kubectl get deployments -o wide
NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS         IMAGES                                                                                          SELECTOR
deployment.apps/workshop-30000      0/1     1            0           21m   workshop-ubuntu    REDACTED                                                            client=30000
deployment.apps/workshop-operator   0/1     1            0           17h   ansible,operator   REDACTED   name=workshop-operator

You can see the pods are still marked Running, while their deployments report Ready: 0/1.

Here are my nodes:

kubectl get nodes -o wide
NAME                STATUS     ROLES    AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION     CONTAINER-RUNTIME
kubernetes-master   Ready      master   34d    v1.17.3   REDACTED      <none>        Ubuntu 19.10   5.3.0-42-generic   docker://19.3.2
kubernetes-worker   NotReady   <none>   34d    v1.17.3   REDACTED      <none>        Ubuntu 19.10   5.3.0-29-generic   docker://19.3.2
node3               NotReady   worker   21d    v1.17.3   REDACTED      <none>        Ubuntu 19.10   5.3.0-40-generic   docker://19.3.2
node4               Ready      <none>   19d    v1.17.3   REDACTED      <none>        Ubuntu 19.10   5.3.0-40-generic   docker://19.3.2
node6               NotReady   <none>   5d7h   v1.17.4   REDACTED      <none>        Ubuntu 19.10   5.3.0-42-generic   docker://19.3.6
node7               NotReady   <none>   5d6h   v1.17.4   REDACTED      <none>        Ubuntu 19.10   5.3.0-42-generic   docker://19.3.6

What could the problem be? All of my containers have readiness and liveness probes. I have searched the documentation and elsewhere, but could not find anything that resolves this.

Currently, the only way I can get the pods on a failed node rescheduled onto a live node is to delete them manually with --force and --grace-period=0, which defeats some of the main goals of Kubernetes: automation and self-healing.
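For reference, the manual workaround looks like this (using one of the stuck pods from the output above):

kubectl delete pod workshop-30000-77b95f456c-sxkp5 -n workshop-operator --force --grace-period=0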

According to the docs (https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-lifetime): "If a node dies or is disconnected from the rest of the cluster, Kubernetes applies a policy for setting the phase of all Pods on the lost node to Failed."

---------- Additional information ---------------

kubectl describe pods workshop-30000-77b95f456c-sxkp5
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  <unknown>          default-scheduler  Successfully assigned workshop-operator/workshop-30000-77b95f456c-sxkp5 to node7
  Normal   Pulling    37m                kubelet, node7     Pulling image "REDACTED"
  Normal   Pulled     37m                kubelet, node7     Successfully pulled image "REDACTED"
  Normal   Created    37m                kubelet, node7     Created container workshop-ubuntu
  Normal   Started    37m                kubelet, node7     Started container workshop-ubuntu
  Warning  Unhealthy  36m (x2 over 36m)  kubelet, node7     Liveness probe failed: Get http://REDACTED:8080/healthz: dial tcp REDACTED:8000: connect: connection refused
  Warning  Unhealthy  36m (x3 over 36m)  kubelet, node7     Readiness probe failed: Get http://REDACTED:8000/readyz: dial tcp REDACTED:8000: connect: connection refused

I believe those liveness and readiness probe failures were simply due to slow startup. The liveness/readiness checks do not appear to have run after the node went down (which seems to have been about 37 minutes ago).
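For context, the probes are defined roughly like this (a simplified sketch, not the exact manifest; the port and timings are illustrative). A longer initialDelaySeconds is what avoids those startup failures:

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080              # illustrative port
  initialDelaySeconds: 30   # long enough to cover the slow startup
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /readyz
    port: 8080              # illustrative port
  initialDelaySeconds: 15
  periodSeconds: 10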

This is a self-hosted cluster with the following versions:

Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:14:22Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:07:13Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}

Thanks to everyone who helps.

Edit: It was either a bug or a potential misconfiguration from the original kubeadm bootstrap. Completely reinstalling the Kubernetes cluster and upgrading from 1.17.4 to 1.18 resolved the issue, and pods are now rescheduled off dead nodes.

1 Answer:

Answer 0 (score: 1)

With the TaintBasedEvictions feature gate enabled (it defaults to true since Kubernetes 1.13), you can set the pod eviction time in the pod spec itself, via tolerations:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
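      # The tolerations below shorten the default 300s wait before pods on a
      # NotReady/unreachable node are evicted; 2s is an aggressively short example value.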
      tolerations:
      - key: "node.kubernetes.io/unreachable"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 2
      - key: "node.kubernetes.io/not-ready"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 2
      containers:
      - image: busybox
        command:
        - sleep
        - "3600"
        imagePullPolicy: IfNotPresent
        name: busybox
      restartPolicy: Always

If the pods are not rescheduled after 300 seconds (the default) or 2 seconds (the toleration set above), you may need to run kubectl delete node, which triggers rescheduling of the pods that were on that node.
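For what it's worth, you can also confirm that the node controller has applied the NoExecute taints to the dead node before removing it (node name taken from the question):

# show the taints the node controller adds to a NotReady node
kubectl describe node node7 | grep -A2 Taints
# deleting the Node object removes its pods and lets the Deployments reschedule them
kubectl delete node node7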