Spark UI History server on Kubernetes?

Date: 2018-08-11 10:51:38

Tags: apache-spark kubernetes

I start applications on a Kubernetes cluster via spark-submit, and I can only see the Spark UI by going to http://driver-pod:port.

How can I start a Spark UI history server on the cluster? And how can I make all running Spark jobs register with it?

Is this possible?

1 Answer:

Answer 0: (score: 1)

Yes, it is possible. Briefly, you need to ensure the following:

  • Make sure all your applications store their event logs in a specific location (filesystem, s3, hdfs, etc.).
  • Deploy the history server in the cluster with access to that event log location.

Now, Spark (by default) reads only from a filesystem path, so I will detail this case using the spark operator:

  • Create a PVC using a volume type that supports the ReadWriteMany access mode, for example an NFS volume. The snippet below assumes you already have a storage class configured for NFS (nfs-volume); a quick way to verify the claim follows the manifest:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: spark-pvc
  namespace: spark-apps
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 5Gi
  storageClassName: nfs-volume
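
Once the claim is applied, checking that it binds is a quick sanity check. A minimal sketch, assuming the manifest above was saved as spark-pvc.yaml:

kubectl create namespace spark-apps   # if it does not exist yet
kubectl apply -f spark-pvc.yaml
# STATUS should report Bound once the NFS provisioner has created the volume
kubectl get pvc spark-pvc -n spark-apps
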
  • Make sure all your Spark applications have event logging enabled and point at the correct path (an equivalent plain spark-submit form follows the snippet):
  sparkConf:
    "spark.eventLog.enabled": "true"
    "spark.eventLog.dir": "file:/mnt"
  • Mount the event log volume into every application (you could also centralize this with the operator's mutating admission webhook). A sample manifest with this configuration looks like the following, with a quick verification step after it:
---
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: spark-java-pi
  namespace: spark-apps

spec:
  type: Java
  mode: cluster

  image: gcr.io/spark-operator/spark:v2.4.4
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: "local:///opt/spark/examples/jars/spark-examples_2.11-2.4.4.jar"

  imagePullPolicy: Always
  sparkVersion: 2.4.4
  sparkConf:
    "spark.eventLog.enabled": "true"
    "spark.eventLog.dir": "file:/mnt"
  restartPolicy:
    type: Never
  volumes:
    - name: spark-data
      persistentVolumeClaim:
        claimName: spark-pvc
  driver:
    cores: 1
    coreLimit: "1200m"
    memory: "512m"
    labels:
      version: 2.4.4
    serviceAccount: spark
    volumeMounts:
      - name: spark-data
        mountPath: /mnt
  executor:
    cores: 1
    instances: 1
    memory: "512m"
    labels:
      version: 2.4.4
    volumeMounts:
      - name: spark-data
        mountPath: /mnt
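
Submitting the application and then looking for the event log file is an easy way to confirm the wiring. A minimal sketch, assuming the manifest above was saved as spark-java-pi.yaml:

kubectl apply -f spark-java-pi.yaml
kubectl get sparkapplications -n spark-apps
# once the job runs, an event log file named after the application ID
# (e.g. spark-<app-id>) should appear on the shared volume under /mnt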

  • Install the Spark history server with the shared volume mounted; you will then see the events in the history server UI. An example Deployment, followed by a port-forward for quick access:
apiVersion: apps/v1
kind: Deployment

metadata:
  name: spark-history-server
  namespace: spark-apps

spec:
  replicas: 1
  selector:
    matchLabels:
      app: spark-history-server

  template:
    metadata:
      name: spark-history-server
      labels:
        app: spark-history-server

    spec:
      containers:
        - name: spark-history-server
          image: gcr.io/spark-operator/spark:v2.4.4

          resources:
            requests:
              memory: "512Mi"
              cpu: "100m"

          # spark-class passes the -D flag through to the JVM, so the
          # history server reads event logs from the shared volume at /data
          command:
            - /sbin/tini
            - -s
            - --
            - /opt/spark/bin/spark-class
            - -Dspark.history.fs.logDirectory=/data/
            - org.apache.spark.deploy.history.HistoryServer

          ports:
            - name: http
              protocol: TCP
              containerPort: 18080

          readinessProbe:
            timeoutSeconds: 4
            httpGet:
              path: /
              port: http

          livenessProbe:
            timeoutSeconds: 4
            httpGet:
              path: /
              port: http

          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: spark-pvc
          readOnly: true
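
For quick access without creating any Service, port-forwarding to the deployment works:

kubectl port-forward deployment/spark-history-server 18080:18080 -n spark-apps
# then browse to http://localhost:18080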

Feel free to configure an Ingress or a Service to access the UI.
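
A minimal sketch of such a Service and Ingress, with a placeholder hostname to adapt:

apiVersion: v1
kind: Service
metadata:
  name: spark-history-server
  namespace: spark-apps
spec:
  selector:
    app: spark-history-server
  ports:
    - name: http
      port: 18080
      targetPort: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: spark-history-server
  namespace: spark-apps
spec:
  rules:
    - host: spark-history.example.com  # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: spark-history-server
                port:
                  name: http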

You can also use Google Cloud Storage, Azure Blob Storage, or AWS S3 as the event log location. To do that you will need to install some extra jars, so I recommend taking a look at the lightbend spark history server image and charts.
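
For example, with S3 the relevant settings would look roughly like this (a sketch: it assumes the hadoop-aws and matching AWS SDK jars are on both the application and history server classpaths, and the bucket name is a placeholder to replace):

# application side
spark.eventLog.enabled    true
spark.eventLog.dir        s3a://<your-bucket>/spark-events

# history server side (e.g. via SPARK_HISTORY_OPTS)
-Dspark.history.fs.logDirectory=s3a://<your-bucket>/spark-events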