Elasticsearch pod fails after init state with no logs

Time: 2018-09-24 15:08:25

Tags: azure elasticsearch kubernetes azure-aks

I'm trying to get an Elasticsearch StatefulSet to work on AKS, but the pods fail and are terminated before I can see any logs. Is there a way to view the logs after the pods have been terminated?

Here is the sample YAML file I'm applying with kubectl apply -f es-statefulset.yaml:

# RBAC authn and authz
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "services"
  - "namespaces"
  - "endpoints"
  verbs:
  - "get"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: kube-system
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: elasticsearch-logging
  namespace: kube-system
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: elasticsearch-logging
  apiGroup: ""
---
# Elasticsearch deployment itself
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    version: v6.4.1
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  serviceName: elasticsearch-logging
  replicas: 2
  selector:
    matchLabels:
      k8s-app: elasticsearch-logging
      version: v6.4.1
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v6.4.1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: elasticsearch-logging
      containers:
      - image: docker.elastic.co/elasticsearch/elasticsearch:6.4.1
        name: elasticsearch-logging
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: "1000m"
            memory: "2048Mi"
          requests:
            cpu: "100m"
            memory: "1024Mi"
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-logging
          mountPath: /data
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: "bootstrap.memory_lock"
          value: "true"
        - name: "ES_JAVA_OPTS"
          value: "-Xms1024m -Xmx2048m"
        - name: "discovery.zen.ping.unicast.hosts"
          value: "elasticsearch-logging"
      # A) This volume mount (emptyDir) can be used when not working with a
      # cloud provider. There will be no persistence. If you want to avoid
      # data wipeout when the pod is recreated, make sure to have
      # "volumeClaimTemplates" at the bottom.
      # volumes:
      # - name: elasticsearch-logging
      #   emptyDir: {}
      #
      # Elasticsearch requires vm.max_map_count to be at least 262144.
      # If your OS already sets this number to a higher value, feel free
      # to remove this init container.
      initContainers:
      - image: alpine:3.6
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
        name: elasticsearch-logging-init
        securityContext:
          privileged: true
  # B) This will request storage on Azure (configure other clouds if necessary)
  volumeClaimTemplates:
    - metadata:
        name: elasticsearch-logging
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: default
        resources:
          requests:
            storage: 64Gi

When I watch the pods as they are created, they look like this:

[screenshot: elasticsearch-logging-0 repeatedly going from Init to Terminating]

I tried kubectl logs -n kube-system elasticsearch-logging-0 -p, hoping to get the logs from the terminated instance.
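
Spelled out, what I'm running looks roughly like this:

kubectl get pods -n kube-system -w                        # watching the pods cycle
kubectl logs elasticsearch-logging-0 -n kube-system -p    # -p / --previous targets the last terminated container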

I'm trying to build on top of this sample from the official (unmaintained) k8s repo. Initially it worked, but after I tried to update the deployment I broke it completely and haven't been able to get it back. I'm using the trial version of Azure AKS.

I'd appreciate any advice.

编辑1:

The result of kubectl describe statefulset elasticsearch-logging -n kube-system is below (with an almost identical Init → Terminated pod flow):

Name:               elasticsearch-logging
Namespace:          kube-system
CreationTimestamp:  Mon, 24 Sep 2018 10:09:07 -0600
Selector:           k8s-app=elasticsearch-logging,version=v6.4.1
Labels:             addonmanager.kubernetes.io/mode=Reconcile
                    k8s-app=elasticsearch-logging
                    kubernetes.io/cluster-service=true
                    version=v6.4.1
Annotations:        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"elasticsea...
Replicas:           0 desired | 1 total
Update Strategy:    RollingUpdate
Pods Status:        0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           k8s-app=elasticsearch-logging
                    kubernetes.io/cluster-service=true
                    version=v6.4.1
  Service Account:  elasticsearch-logging
  Init Containers:
   elasticsearch-logging-init:
    Image:      alpine:3.6
    Port:       <none>
    Host Port:  <none>
    Command:
      /sbin/sysctl
      -w
      vm.max_map_count=262144
    Environment:  <none>
    Mounts:       <none>
  Containers:
   elasticsearch-logging:
    Image:       docker.elastic.co/elasticsearch/elasticsearch:6.4.1
    Ports:       9200/TCP, 9300/TCP
    Host Ports:  0/TCP, 0/TCP
    Limits:
      cpu:     1
      memory:  2Gi
    Requests:
      cpu:     100m
      memory:  1Gi
    Environment:
      NAMESPACE:                          (v1:metadata.namespace)
      bootstrap.memory_lock:             true
      ES_JAVA_OPTS:                      -Xms1024m -Xmx2048m
      discovery.zen.ping.unicast.hosts:  elasticsearch-logging
    Mounts:
      /data from elasticsearch-logging (rw)
  Volumes:  <none>
Volume Claims:
  Name:          elasticsearch-logging
  StorageClass:  default
  Labels:        <none>
  Annotations:   <none>
  Capacity:      64Gi
  Access Modes:  [ReadWriteMany]
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  53s   statefulset-controller  create Pod elasticsearch-logging-0 in StatefulSet elasticsearch-logging successful
  Normal  SuccessfulDelete  1s    statefulset-controller  delete Pod elasticsearch-logging-0 in StatefulSet elasticsearch-logging successful

The flow remains the same:

[screenshot: the same Init → Terminating loop]

2 Answers:

Answer 0 (score: 0)

You're assuming the pods are terminated because of an ES-related error.
I'm not so sure ES ever got to start running, which would explain the lack of logs.

Having multiple pods with the same name is very suspicious, especially in a StatefulSet, so something is wrong there.
I'd first try kubectl describe statefulset elasticsearch-logging -n kube-system, which should explain what's going on; quite possibly there is a problem mounting the volume before ES even gets to run.
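
For example, a few standard kubectl commands usually surface a volume problem (a sketch; the pod name assumes the first replica):

kubectl describe pod elasticsearch-logging-0 -n kube-system      # pod events (attach/mount failures show up here)
kubectl get pvc -n kube-system                                   # is the claim Bound, or stuck Pending?
kubectl get events -n kube-system --sort-by='.lastTimestamp'     # recent cluster events, newest last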

I'm also fairly sure you want to change ReadWriteOnce to ReadWriteMany.
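
As a sketch of that change in the volumeClaimTemplates (note: the default storage class on AKS is backed by Azure Disk and only supports ReadWriteOnce, so ReadWriteMany would also need a class such as azurefile, which is an assumption on my part):

  volumeClaimTemplates:
    - metadata:
        name: elasticsearch-logging
      spec:
        accessModes: ["ReadWriteMany"]
        storageClassName: azurefile   # assumption: a class that actually supports ReadWriteMany
        resources:
          requests:
            storage: 64Gi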

Hope this helps!

Answer 1 (score: 0)

Yes, there is a way. You can SSH into the machine where the pod was running and, assuming you are using Docker, run:

docker ps -a   # Shows all the Exited containers (some of those are part of your pod)

Then:

docker logs <container-id-of-your-exited-elasticsearch-container>

This also works if you are using CRI-O or containerd; just use the matching runtime CLI (for example crictl) instead of docker.
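
To find the node to SSH into, something like this works (standard kubectl, shown as a sketch):

kubectl get pod elasticsearch-logging-0 -n kube-system -o wide   # the NODE column shows where the containers ran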