Kubernetes Helm on AKS: Pod CrashLoopBackOff

Asked: 2019-02-04 16:10:46

Tags: docker kubernetes azure-kubernetes azure-aks kubernetes-helm

I'm trying to deploy a simple Node.js app to Azure Kubernetes Service using Helm, but after my image is pulled the Pod goes into CrashLoopBackOff.

Here's what I've tried so far:

My Dockerfile:

FROM node:6
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 32000
CMD [ "npm", "start" ]

My server.js:

'use strict';

const express = require('express');

const PORT = 32000;
const HOST = '0.0.0.0';

const app = express();
app.get('/', (req, res) => {
  res.send('Hello world from container.\n');
});

app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);

I have pushed this image to ACR.
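
For reference, a chart scaffolded with helm create references the image and service port through values.yaml. The sketch below only shows the scaffold defaults with placeholder values, not necessarily what my chart contains:

# values.yaml (sketch only; repository and tag are placeholders)
image:
  repository: myregistry.azurecr.io/docker-web-app   # placeholder ACR image reference
  tag: "0.5"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80        # the helm create scaffold defaults to port 80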


New update: here is the complete output of kubectl describe pod POD_NAME:

Name:               myrel02-mychart06-5dc9d4b86c-kqg4n
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               aks-nodepool1-19665249-0/10.240.0.6
Start Time:         Tue, 05 Feb 2019 11:31:27 +0500
Labels:             app.kubernetes.io/instance=myrel02
                    app.kubernetes.io/name=mychart06
                    pod-template-hash=5dc9d4b86c
Annotations:        <none>
Status:             Running
IP:                 10.244.2.5
Controlled By:      ReplicaSet/myrel02-mychart06-5dc9d4b86c
Containers:
  mychart06:
    Container ID:   docker://c239a2b9c38974098bbb1646a272504edd2d199afa50f61d02a0ce335fe60660
    Image:          registry-1.docker.io/arycloud/docker-web-app:0.5
    Image ID:       docker-pullable://registry-1.docker.io/arycloud/docker-web-app@sha256:4faab280d161b727e0a6a6d9dfb52b22cf9c6cd7dd07916d6fe164d9af5737a7
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 05 Feb 2019 11:39:56 +0500
      Finished:     Tue, 05 Feb 2019 11:40:22 +0500
    Ready:          False
    Restart Count:  7
    Liveness:       http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      KUBERNETES_PORT_443_TCP_ADDR:  cluster06-ary-2a187a-dc393b82.hcp.centralus.azmk8s.io
      KUBERNETES_PORT:               tcp://cluster06-ary-2a187a-dc393b82.hcp.centralus.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:       tcp://cluster06-ary-2a187a-dc393b82.hcp.centralus.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:       cluster06-ary-2a187a-dc393b82.hcp.centralus.azmk8s.io
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gm49w (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-gm49w:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-gm49w
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                    From                               Message
  ----     ------     ----                   ----                               -------
  Normal   Scheduled  10m                    default-scheduler                  Successfully assigned default/myrel02-mychart06-5dc9d4b86c-kqg4n to aks-nodepool1-19665249-0
  Normal   Pulling    10m                    kubelet, aks-nodepool1-19665249-0  pulling image "registry-1.docker.io/arycloud/docker-web-app:0.5"
  Normal   Pulled     10m                    kubelet, aks-nodepool1-19665249-0  Successfully pulled image "registry-1.docker.io/arycloud/docker-web-app:0.5"
  Warning  Unhealthy  9m30s (x6 over 10m)    kubelet, aks-nodepool1-19665249-0  Liveness probe failed: Get http://10.244.2.5:80/: dial tcp 10.244.2.5:80: connect: connection refused
  Normal   Created    9m29s (x3 over 10m)    kubelet, aks-nodepool1-19665249-0  Created container
  Normal   Started    9m29s (x3 over 10m)    kubelet, aks-nodepool1-19665249-0  Started container
  Normal   Killing    9m29s (x2 over 9m59s)  kubelet, aks-nodepool1-19665249-0  Killing container with id docker://mychart06:Container failed liveness probe.. Container will be killed and recreated.
  Warning  Unhealthy  9m23s (x7 over 10m)    kubelet, aks-nodepool1-19665249-0  Readiness probe failed: Get http://10.244.2.5:80/: dial tcp 10.244.2.5:80: connect: connection refused
  Normal   Pulled     5m29s (x6 over 9m59s)  kubelet, aks-nodepool1-19665249-0  Container image "registry-1.docker.io/arycloud/docker-web-app:0.5" already present on machine
  Warning  BackOff    22s (x33 over 7m59s)   kubelet, aks-nodepool1-19665249-0  Back-off restarting failed container
  

Update: output of docker logs CONTAINER_ID:

> nodejs@1.0.0 start /usr/src/app
> node server.js

Running on http://0.0.0.0:32000

How can I avoid this problem?

Thanks!

2 Answers:

Answer 0 (score: 0):

As you can see in the kubectl describe pod output, the container in your Pod has terminated with exit code 0 (as @4c74356b41 mentioned in the comments). Reason: Completed means it finished successfully, without any error. However, the Pod's lifetime is very short, so Kubernetes keeps scheduling new Pods, while the liveness and readiness probes still cannot confirm that the container is healthy.
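
For illustration, the probes in your describe output (http-get http://:http/ against port 80) look like the defaults that a helm create scaffold puts into templates/deployment.yaml. The snippet below is only a sketch of that scaffold, assuming it is unchanged; your actual chart may differ:

# Sketch of the container section generated by helm create (assumed; adjust to your real chart)
ports:
  - name: http
    containerPort: 80        # the probes in the describe output target this port
    protocol: TCP
livenessProbe:
  httpGet:
    path: /
    port: http               # resolves to containerPort 80 above
readinessProbe:
  httpGet:
    path: /
    port: http
# server.js listens on 32000, so a probe against port 80 gets "connection refused"
# and the kubelet keeps killing and recreating the container.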

To keep the Pod running, there must be a task (process) inside the container that keeps running continuously. There are plenty of discussions and solutions for this kind of problem; you can find more hints here.
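
As a minimal illustration (not taken from the chart in question), a container whose main process exits right away ends up in exactly this restart loop, while a long-running server keeps the Pod alive:

# Sketch only: a command that exits immediately is marked Completed,
# restarted by the kubelet, and eventually backs off (CrashLoopBackOff).
containers:
  - name: web
    image: node:6
    command: ["node", "-e", "console.log('done')"]   # process ends as soon as it prints

# By contrast, the app in the question keeps running on its own:
#   command: ["npm", "start"]   # server.js calls app.listen(), so the process never exits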

Answer 1 (score: 0):

The kubectl logs command only works while the Pod is up and running. If it isn't, you can use the kubectl get events command. It gives you the event log and sometimes (in my experience) a clue about what's going on.

kubectl get events -n <your_app_namespace> --sort-by='.metadata.creationTimestamp'

By default it doesn't sort the events, hence the --sort-by flag.