Kubernetes pod evicted due to disk pressure

Date: 2020-05-07 13:30:22

Tags: amazon-web-services amazon-ec2 kubernetes kubernetes-pod

I have a k8s environment with one master node and two worker nodes. Two pods (say Pod-A and Pod-B) were running on one of the nodes; Pod-A got evicted because of disk pressure, while Pod-B kept running on the same node without being evicted. Even when I checked the node's resources (memory and disk space), there was enough free space available. I also checked the Docker side with "docker system df", which showed 48% reclaimable space for images and 0% reclaimable for everything else. So finally I deleted all the evicted pods of Pod-A, and it is running fine now.
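
For reference, a minimal sketch of the checks described above; <node-name> and <evicted-pod-name> are placeholders, not values from this cluster:

# Node conditions: look for DiskPressure=True and the related kubelet events
kubectl describe node <node-name>

# Eviction events across all namespaces
kubectl get events --all-namespaces --field-selector reason=Evicted

# Disk usage on the node itself (kubelet and Docker directories)
df -h /var/lib/kubelet /var/lib/docker

# Reclaimable Docker space (images, containers, local volumes, build cache)
docker system df

# Evicted pods are left behind in Failed phase; list and clean them up
kubectl get pods --all-namespaces --field-selector=status.phase=Failed
kubectl delete pod <evicted-pod-name>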

1) Why was Pod-A evicted while Pod-B kept running on the same node?

2) Why was Pod-A evicted if there were enough resources available?

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.17.0 (0c01409)
  creationTimestamp: null
  labels:
    io.kompose.service: zuul
  name: zuul
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 1
  revisionHistoryLimit: 2147483647
  selector:
    matchLabels:
      io.kompose.service: zuul
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: zuul
    spec:
      containers:
      - env:
        - name: DATA_DIR
          value: /data/work/
        - name: log_file_path
          value: /data/work/logs/zuul/
        - name: spring_cloud_zookeeper_connectString
          value: zoo_host:5168
        image: repository/zuul:version
        imagePullPolicy: Always
        name: zuul
        ports:
        - containerPort: 9090
          hostPort: 9090
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /data/work/
          name: zuul-claim0
      dnsPolicy: ClusterFirst
      hostNetwork: true
      nodeSelector:
         disktype: node1
      imagePullSecrets:
      - name: regcred
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /opt/DATA_DIR
          type: ""
        name: zuul-claim0
status: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.17.0 (0c01409)
  creationTimestamp: null
  labels:
    io.kompose.service: routing
  name: routing
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 1
  revisionHistoryLimit: 2147483647
  selector:
    matchLabels:
      io.kompose.service: routing
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: routing
    spec:
      containers:
      - env:
        - name: DATA_DIR
          value: /data/work/
        - name: log_file_path
          value: /data/logs/routing/
        - name: spring_cloud_zookeeper_connectString
          value: zoo_host:5168
        image: repository/routing:version
        imagePullPolicy: Always
        name: routing
        ports:
        - containerPort: 8090
          hostPort: 8090
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /data/work/
          name: routing-claim0
      dnsPolicy: ClusterFirst
      hostNetwork: true
      nodeSelector:
         disktype: node1
      imagePullSecrets:
      - name: regcred
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /opt/DATA_DIR
          type: ""
        name: routing-claim0
status: {}
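
Note that both containers set resources: {}, so no requests or limits (including ephemeral-storage) are declared. If node-local disk usage by the containers is the concern, an ephemeral-storage request/limit could be added to each container spec, roughly as sketched below; the 1Gi/2Gi figures are only illustrative, not values from this setup:

        resources:
          requests:
            ephemeral-storage: "1Gi"   # reserved node-local disk considered at scheduling time (illustrative value)
          limits:
            ephemeral-storage: "2Gi"   # kubelet evicts the pod if its ephemeral-storage usage exceeds this (illustrative value)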

0 Answers:

No answers yet.