Kubernetes init container exits with no logs

Date: 2020-07-01 20:54:42

Tags: kubernetes kubernetes-statefulset

I'm using Kubernetes to deploy a service that uses a local database. The service is deployed as a StatefulSet with 3 replicas. I have 3 different init containers, but the third one always fails with CrashLoopBackOff. That third init container only removes some directories on a mounted volume. I've tried several variations of deleting the directories if they exist, using bash logic or just plain rm -rf. The result is always the same CrashLoopBackOff, with no logs.

The specific init container that fails:

      - name: init-snapshot
        image: camlcasetezos/tezos:mainnet
        command: 
        - sh
        - -c
        # - exit 0
        - if [ -d "/mnt/nd/node/data/store" ]; then rm -Rf /mnt/nd/node/data/store; fi
        - if [ -d "/mnt/nd/node/data/context" ]; then rm -Rf /mnt/nd/node/data/context; fi
        volumeMounts:
        - name: node-data
          mountPath: /mnt/nd
        securityContext:
          runAsUser: 100

The full StatefulSet:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mainnet-full-node
  labels:
    app: mainnet-full
    component: mainnet-full-node
spec:
  serviceName: mainnet-full-rpc
  replicas: 3
  selector:
    matchLabels:
      app: mainnet-full
      component: mainnet-full-node
  template:
    metadata:
      labels:
        app: mainnet-full
        component: mainnet-full-node
    spec:
      initContainers:
      - name: init-perm
        # Fix the permissions of the storage volumes--chown to the right user.
        image: library/busybox
        command: 
        - sh
        - -c
        - chown -R 100 /mnt/*
        volumeMounts:
        - name: node-data
          mountPath: /mnt/nd
        - name: node-home
          mountPath: /mnt/nh
        securityContext:
          runAsUser: 0
      - name: init-identity
        # Generate a network identity if needed (use to repair the default, then disable)
        image: camlcasetezos/tezos:mainnet
        command: 
        - sh
        - -c
        - exit 0; rm /mnt/nd/node/data/identity.json 2>&1 > /dev/null; /usr/local/bin/tezos-node identity generate 26 --data-dir=/mnt/nd/node/data
        volumeMounts:
        - name: node-data
          mountPath: /mnt/nd
        securityContext:
          runAsUser: 100
      - name: init-snapshot
        # Remove any existing store and context directories from the data volume
        image: camlcasetezos/tezos:mainnet
        command: 
        - sh
        - -c
        # - exit 0
        - if [ -d "/mnt/nd/node/data/store" ]; then rm -Rf /mnt/nd/node/data/store; fi
        - if [ -d "/mnt/nd/node/data/context" ]; then rm -Rf /mnt/nd/node/data/context; fi
        volumeMounts:
        - name: node-data
          mountPath: /mnt/nd
        securityContext:
          runAsUser: 100
      # We have to use host networking to get the correct address advertised?
      #hostNetwork: true
      containers:
      - name: mainnet-full-node
        image: camlcasetezos/tezos:mainnet
        args: ["tezos-node", "--history-mode", "full"]
        command: # Note the rpc address; block it from your firewall.
        - sh
        - -c
        - /usr/local/bin/tezos-node snapshot import /tmp/mainnet.full --data-dir=/var/run/tezos/node/data
        ports:
        - containerPort: 8732 # management
        - containerPort: 9732 # p2p service
        volumeMounts:
        - name: node-data
          mountPath: "/var/run/tezos"
        - name: node-home
          mountPath: "/home/tezos"
        - name: node-config
          mountPath: /home/tezos/.tezos-node
        - name: local-client-config
          mountPath: /home/tezos/.tezos-client
        securityContext:
          # empirically, this is the uid that gets chosen for the 'tezos'
          # user. Make it explicit.
          runAsUser: 100
      volumes:
      - name: node-data
        persistentVolumeClaim:
          claimName: node-data
      - name: node-config
        configMap:
          name: configs
          items:
          - key: node-config
            path: config
      - name: local-client-config
        configMap:
          name: configs
          items:
          - key: local-client-config
            path: config
  volumeClaimTemplates:
  - metadata:
      name: node-data
    spec:
      accessModes:
      - ReadWriteOnce
      volumeMode: Filesystem
      resources:
        requests:
          storage: 100Gi
      storageClassName: do-block-storage
  - metadata:
      name: node-home
    spec:
      accessModes:
      - ReadWriteOnce
      volumeMode: Filesystem
      resources:
        requests:
          storage: 1Gi
      storageClassName: do-block-storage

1 Answer:

Answer 0 (score: 1)

Try kubectl logs -p podname to get the previous logs.

Since the pod is in a crash loop, you can only see the logs from the run before it crashed.
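
For example, because the failing container here is an init container, you also need to select it with -c. A minimal sketch, assuming the first replica is named mainnet-full-node-0 (the default StatefulSet pod naming for the manifest above):

# logs from the previous (crashed) run of the failing init container
kubectl logs mainnet-full-node-0 -c init-snapshot --previous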

If that doesn't work, try kubectl describe pod podname and look at the events shown at the bottom. Even for something stuck in CrashLoopBackOff, there will usually be at least something in the events, even when the pod itself fails to start.
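
A sketch of the same check, again assuming the pod name mainnet-full-node-0:

# full pod description; the Events section is at the bottom
kubectl describe pod mainnet-full-node-0
# or list just the events recorded for that pod
kubectl get events --field-selector involvedObject.name=mainnet-full-node-0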