Pending Spark pod on Google Kubernetes cluster: insufficient CPU

Date: 2018-10-11 15:05:20

Tags: docker apache-spark kubernetes google-cloud-platform

I am trying to submit a Spark job to a Google Kubernetes cluster via spark-submit.

The Docker image is built from the official Spark Dockerfile shipped with Spark 2.3.0.

Here is the submit script:

#! /bin/bash
spark-submit \
--master k8s://https://<master url> \
--deploy-mode cluster \
--conf spark.executor.instances=1 \
--conf spark.kubernetes.container.image=<official image> \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.app.name=app-name \
--class ExpletivePI \
--name spark-pi \
local:///opt/spark/examples/spark-demo.jar

It runs perfectly on my local minikube.

However, when I submit it to my Google Kubernetes cluster, the driver pod always stays in Pending because of insufficient CPU:

0/3 nodes are available: 3 Insufficient cpu. 

kubectl describe nodes looks fine; here is the describe output for the problematic pod:

Name:         spark-pi-e890cd00394b3b20942f22d0a9173c1c-driver
Namespace:    default
Node:         <none>
Labels:       spark-app-selector=spark-3e8ff877bebd46be9fc8d956531ba186
              spark-role=driver
Annotations:  spark-app-name=spark-pi
Status:       Pending
IP:           
Containers:
  spark-kubernetes-driver:
    Image:      geekbeta/spark:v2
    Port:       <none>
    Host Port:  <none>
    Args:
      driver
    Limits:
      memory:  1408Mi
    Requests:
      cpu:     1
      memory:  1Gi
    Environment:
      SPARK_DRIVER_MEMORY:        1g
      SPARK_DRIVER_CLASS:         ExpletivePI
      SPARK_DRIVER_ARGS:          
      SPARK_DRIVER_BIND_ADDRESS:   (v1:status.podIP)
      SPARK_MOUNTED_CLASSPATH:    /opt/spark/tang_stuff/spark-demo.jar:/opt/spark/tang_stuff/spark-demo.jar
      SPARK_JAVA_OPT_0:           -Dspark.app.name=spark-pi
      SPARK_JAVA_OPT_1:           -Dspark.app.id=spark-3e8ff877bebd46be9fc8d956531ba186
      SPARK_JAVA_OPT_2:           -Dspark.driver.host=spark-pi-e890cd00394b3b20942f22d0a9173c1c-driver-svc.default.svc
      SPARK_JAVA_OPT_3:           -Dspark.submit.deployMode=cluster
      SPARK_JAVA_OPT_4:           -Dspark.driver.blockManager.port=7079
      SPARK_JAVA_OPT_5:           -Dspark.kubernetes.executor.podNamePrefix=spark-pi-e890cd00394b3b20942f22d0a9173c1c
      SPARK_JAVA_OPT_6:           -Dspark.master=k8s://https://35.229.152.59
      SPARK_JAVA_OPT_7:           -Dspark.kubernetes.authenticate.driver.serviceAccountName=spark
      SPARK_JAVA_OPT_8:           -Dspark.executor.instances=1
      SPARK_JAVA_OPT_9:           -Dspark.kubernetes.container.image=geekbeta/spark:v2
      SPARK_JAVA_OPT_10:          -Dspark.kubernetes.driver.pod.name=spark-pi-e890cd00394b3b20942f22d0a9173c1c-driver
      SPARK_JAVA_OPT_11:          -Dspark.jars=/opt/spark/tang_stuff/spark-demo.jar,/opt/spark/tang_stuff/spark-demo.jar
      SPARK_JAVA_OPT_12:          -Dspark.driver.port=7078
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from spark-token-9gdsb (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  spark-token-9gdsb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  spark-token-9gdsb
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  3m (x125 over 38m)  default-scheduler  0/3 nodes are available: 3 Insufficient cpu.

My cluster has 3 CPUs and 11G of RAM in total. I'm really confused and don't know what to do next; any suggestions or comments are much appreciated!

1 Answer:

Answer 0 (score: 0):

Problem solved. It turns out that the driver pod requests 1 CPU by default, and in my case GKE could not schedule it because each node in my cluster has only one CPU.
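
To see why, it helps to compare each node's allocatable CPU with what is already requested on it: on a 1-vCPU GKE node, the kube-system pods typically reserve part of the core, so a pod asking for a full CPU can never fit. A quick check, assuming kubectl is pointed at the cluster in question:

# How much CPU the scheduler considers allocatable on each node
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu
# How much of that is already requested by existing pods
kubectl describe nodes | grep -A 5 "Allocated resources"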

By lowering the driver pod's CPU request, it now runs on GCP.
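
For reference, a sketch of the adjusted submit command. It assumes Spark 2.3 behaviour, where the driver pod's CPU request is taken from spark.driver.cores and passed through as a Kubernetes quantity (so fractional values are accepted); later releases add spark.kubernetes.driver.request.cores for the same purpose. Everything else is unchanged from the original script.

#! /bin/bash
# Same job as before, but with the driver's CPU request lowered so it fits
# on a 1-vCPU node alongside the kube-system pods.
spark-submit \
--master k8s://https://<master url> \
--deploy-mode cluster \
--conf spark.executor.instances=1 \
--conf spark.driver.cores=0.5 \
--conf spark.kubernetes.container.image=<official image> \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.app.name=app-name \
--class ExpletivePI \
--name spark-pi \
local:///opt/spark/examples/spark-demo.jar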