Should Kubernetes readiness probes emit an event when the container transitions to the ready state?

Time: 2018-10-24 11:34:22

Tags: kubernetes

I am starting to introduce liveness and readiness probes in my services, but I'm not sure that I'm interpreting the output correctly.

kubectl gives me something like this:

kubectl describe pod mypod

Name:               myapp-5798dd798c-t7dqs
Namespace:          dev
Node:               docker-for-desktop/192.168.65.3
Start Time:         Wed, 24 Oct 2018 13:22:54 +0200
Labels:             app=myapp
                    pod-template-hash=1354883547
Annotations:        version: v2
Status:             Running
IP:                 10.1.0.103
Controlled By:      ReplicaSet/myapp-5798dd798c
Containers:
  myapp:
    Container ID:   docker://5d39cb47d2278eccd6d28c1eb35f93112e3ad103485c1c825de634a490d5b736
    Image:          myapp:latest
    Image ID:       docker://sha256:61dafd0c208e2519d0165bf663e4b387ce4c2effd9237fb29fb48d316eda07ff
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 24 Oct 2018 13:23:06 +0200
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:80/healthz/live delay=0s timeout=10s period=60s #success=1 #failure=3
    Readiness:      http-get http://:80/healthz/ready delay=3s timeout=3s period=5s #success=1 #failure=3
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gvnc2 (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  default-token-gvnc2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-gvnc2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age  From                         Message
  ----     ------                 ---- ----                         -------
  Normal   Scheduled              84s  default-scheduler            Successfully assigned myapp-5798dd798c-t7dqs to docker-for-desktop
  Normal   SuccessfulMountVolume  84s  kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "default-token-gvnc2"
  Normal   Pulled                 75s  kubelet, docker-for-desktop  Container image "myapp:latest" already present on machine
  Normal   Created                74s  kubelet, docker-for-desktop  Created container
  Normal   Started                72s  kubelet, docker-for-desktop  Started container
  Warning  Unhealthy              65s  kubelet, docker-for-desktop  Readiness probe failed: Get http://10.1.0.103:80/healthz/ready: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

Now, I notice that the container has Ready: True, but the last event in the list of events is a Warning because the readiness probe failed. (In the application log I can see that there have been many more incoming readiness probe requests since then, and they all completed successfully.)
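For reference, the two probes are configured roughly like this (reconstructed from the Liveness and Readiness lines in the output above; the surrounding container spec is abbreviated and partly assumed):

containers:
  - name: myapp
    image: myapp:latest
    ports:
      - containerPort: 80
    livenessProbe:
      httpGet:
        path: /healthz/live
        port: 80
      initialDelaySeconds: 0   # delay=0s
      timeoutSeconds: 10       # timeout=10s
      periodSeconds: 60        # period=60s
      successThreshold: 1
      failureThreshold: 3
    readinessProbe:
      httpGet:
        path: /healthz/ready
        port: 80
      initialDelaySeconds: 3   # delay=3s
      timeoutSeconds: 3        # timeout=3s
      periodSeconds: 5         # period=5s
      successThreshold: 1
      failureThreshold: 3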

How should I interpret this information? Does Kubernetes consider my pod ready, or not?

1 Answer:

Answer 0 (score: 3):

A pod is ready when the readiness probes of all its containers return success. In your case the readiness probe failed on the first attempt, but subsequent probes succeeded and the container moved into the ready state. Compare this with the example of a failing readiness probe below.

In the example below, the readiness probe has failed 58 times over the last 11 minutes:

Events:
  Type     Reason     Age                  From                                   Message
  ----     ------     ----                 ----                                   -------
  Normal   Scheduled  11m                  default-scheduler                      Successfully assigned default/upnready to mylabserver.com
  Normal   Pulling    11m                  kubelet, mylabserver.com  pulling image "luksa/kubia:v3"
  Normal   Pulled     11m                  kubelet, mylabserver.com  Successfully pulled image "luksa/kubia:v3"
  Normal   Created    11m                  kubelet, mylabserver.com  Created container
  Normal   Started    11m                  kubelet, mylabserver.com  Started container
  Warning  Unhealthy  103s (x58 over 11m)  kubelet, mylabserver.com  Readiness probe failed: Get http://10.44.0.123:80/: dial tcp 10.44.0.123:80: connect: 

The pod is also reported as not ready, as shown below:

kubectl get pods -l run=upnready
NAME       READY   STATUS    RESTARTS   AGE
upnready   0/1     Running   0          17m

In your case, the readiness probe has passed the health check since then, and your pod is in the ready state.

You can make effective use of initialDelaySeconds, periodSeconds, and timeoutSeconds to get better results.
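As an illustration, here is a minimal sketch of a tuned readiness probe (the values below are assumptions chosen to show the fields, not recommendations; match them to your application's startup time):

readinessProbe:
  httpGet:
    path: /healthz/ready
    port: 80
  initialDelaySeconds: 10   # give the app time to start before the first probe
  periodSeconds: 5          # probe every 5 seconds
  timeoutSeconds: 5         # allow a slower response before counting a failure
  failureThreshold: 3       # mark the container unready after 3 consecutive failures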

See also: article on readiness probe and liveness probe