K8s is not killing my Airflow webserver pod

Time: 2017-12-02 00:13:24

Tags: linux docker kubernetes airflow

My Airflow instance runs in a Kubernetes container.

The webserver hit a DNS error (it could not resolve my database's hostname to an IP address) and the gunicorn workers were killed.

What bothers me is that Kubernetes made no attempt to kill the pod and start a new one.

Pod log output:

OperationalError: (psycopg2.OperationalError) could not translate host name "my.dbs.url" to address: Temporary failure in name resolution
[2017-12-01 06:06:05 +0000] [2202] [INFO] Worker exiting (pid: 2202)
[2017-12-01 06:06:05 +0000] [2186] [INFO] Worker exiting (pid: 2186)
[2017-12-01 06:06:05 +0000] [2190] [INFO] Worker exiting (pid: 2190)
[2017-12-01 06:06:05 +0000] [2194] [INFO] Worker exiting (pid: 2194)
[2017-12-01 06:06:05 +0000] [2198] [INFO] Worker exiting (pid: 2198)
[2017-12-01 06:06:06 +0000] [13] [INFO] Shutting down: Master
[2017-12-01 06:06:06 +0000] [13] [INFO] Reason: Worker failed to boot.

The pod status in k8s is RUNNING, but when I open an exec shell in the Kubernetes UI I get the following output (gunicorn seems to know that it is dead):

root@webserver-373771664-3h4v9:/# ps -Al
F S   UID   PID  PPID  C PRI  NI ADDR SZ WCHAN  TTY          TIME CMD
4 S     0     1     0  0  80   0 - 107153 -     ?        00:06:42 /usr/local/bin/
4 Z     0    13     1  0  80   0 -     0 -      ?        00:01:24 gunicorn: maste <defunct>
4 S     0  2206     0  0  80   0 -  4987 -      ?        00:00:00 bash
0 R     0  2224  2206  0  80   0 -  7486 -      ?        00:00:00 ps

Here is my deployment YAML:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
  namespace: airflow
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: airflow-webserver
    spec:
      volumes:
      - name: webserver-dags
        emptyDir: {}
      containers:
      - name: airflow-webserver
        image: my.custom.image:latest
        imagePullPolicy: Always
        resources:
          requests:
            cpu: 100m
          limits:
            cpu: 500m
        ports:
        - containerPort: 80
          protocol: TCP
        env:
        - name: AIRFLOW_HOME
          value: /var/lib/airflow
        - name: AIRFLOW__CORE__SQL_ALCHEMY_CONN
          valueFrom:
            secretKeyRef:
              name: db1
              key: sqlalchemy_conn
        volumeMounts:
        - mountPath: /var/lib/airflow/dags/
          name: webserver-dags
        command: ["airflow"]
        args: ["webserver"]
      - name: docker-s3-to-backup
        image: my.custom.image:latest
        imagePullPolicy: Always
        resources:
          requests:
            cpu: 50m
          limits:
            cpu: 500m
        env:
        - name: ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: aws
              key: access_key_id
        - name: SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: aws
              key: secret_access_key
        - name: S3_PATH
          value: s3://my-s3-bucket/dags/
        - name: DATA_PATH
          value: /dags/
        - name: CRON_SCHEDULE
          value: "*/5 * * * *"
        volumeMounts:
        - mountPath: /dags/
          name: webserver-dags
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: webserver
  namespace: airflow
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: webserver
  minReplicas: 2
  maxReplicas: 20
  targetCPUUtilizationPercentage: 75
---
apiVersion: v1
kind: Service
metadata:
  labels:
  name: webserver
  namespace: airflow
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    app: airflow-webserver

2 answers:

Answer 0 (score 2):

You need to define readiness and liveness probes so that Kubernetes can detect the state of your pod.

As documented here: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-a-tcp-liveness-probe

ports:
- containerPort: 8080
readinessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20
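For the webserver deployment above, a TCP check on the container port would work, but an HTTP check is stricter (a zombie gunicorn master can still hold the listening socket). A sketch, assuming the Airflow webserver answers HTTP on containerPort 80; the `/health` endpoint may not exist on older Airflow versions, so fall back to `path: /` or a tcpSocket check if it returns 404:

```yaml
# Hypothetical probe block for the airflow-webserver container above.
livenessProbe:
  httpGet:
    path: /health          # assumed endpoint; use / on older Airflow
    port: 80
  initialDelaySeconds: 60  # gunicorn workers can take a while to boot
  periodSeconds: 30
  failureThreshold: 3      # restart the container after ~90s of failures
readinessProbe:
  httpGet:
    path: /health
    port: 80
  initialDelaySeconds: 15
  periodSeconds: 10
```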

Answer 1 (score 1):

Well, when the process dies in a container, the container exits, and the kubelet restarts the container on the same node / within the same pod. What happened here is in no way Kubernetes' fault; it is actually a problem with your container. The main process that you start in the container (whether from CMD or via ENTRYPOINT) needs to die for the above to happen, and the one you started did not (one went into zombie mode, but was never reaped, which is an example of yet another problem: zombie reaping). A liveness probe will help in this situation (as @sfgroups mentioned), because it will terminate the pod when the probe fails, but that treats the symptom rather than the root cause (not that you shouldn't define probes as a matter of good practice).
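One way to address the root cause is to run a minimal init process as PID 1 inside the container, so zombie children get reaped and the death of the service actually takes the container down. A sketch using tini; the base image is a placeholder, as the real contents of `my.custom.image` are not shown in the question:

```dockerfile
# Hypothetical additions to the Dockerfile behind my.custom.image.
# tini as PID 1 reaps zombie children and forwards signals, so when
# the gunicorn master dies the container exits and kubelet restarts it.
FROM python:3.6-slim   # placeholder -- keep your existing base image

RUN apt-get update \
    && apt-get install -y --no-install-recommends tini \
    && rm -rf /var/lib/apt/lists/*

# -g forwards signals to tini's whole process group, not just the child
ENTRYPOINT ["/usr/bin/tini", "-g", "--"]
CMD ["airflow", "webserver"]
```

Note that the deployment above sets `command: ["airflow"]`, which overrides the image's ENTRYPOINT; either remove `command:`/`args:` from the pod spec so the image entrypoint applies, or change them to `command: ["/usr/bin/tini", "-g", "--", "airflow"]` with `args: ["webserver"]`.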