I am trying to run a spark-submit command from a Kubernetes pod. The properties file is mounted as a configmap. Here is the pod's JSON file:
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "spark-submit-pod",
    "labels": {
      "app": "myapp"
    }
  },
  "spec": {
    "serviceAccountName": "spark",
    "volumes": [
      {
        "name": "prop-volume",
        "configMap": {
          "name": "prop"
        }
      },
      {
        "name": "pem-volume",
        "configMap": {
          "name": "pem"
        }
      }
    ],
    "imagePullSecrets": [
      {
        "name": "<name of secret>"
      }
    ],
    "containers": [
      {
        "name": "test-container",
        "image": "<image>",
        "volumeMounts": [
          {
            "mountPath": "/etc/config",
            "name": "pem-volume"
          },
          {
            "mountPath": "/etc/prop",
            "name": "prop-volume"
          }
        ],
        "command": [
          "/opt/spark/bin/spark-submit",
          "--name",
          "SparkPi",
          "--deploy-mode",
          "cluster",
          "--master",
          "k8s://<path to k8s cluster>",
          "--class",
          "org.apache.spark.examples.SparkPi",
          "/opt/spark/examples/jars/spark-examples_2.11-2.4.0.jar",
          "--properties-file=/etc/prop/properties.conf"
        ]
      }
    ],
    "restartPolicy": "Never"
  }
}
When the same pod is deployed with the configuration passed instead as multiple --conf flags on the command line, spark-submit succeeds. The command above, however, produces the following error in the pod's logs:
Exception in thread "main" org.apache.spark.SparkException: Must specify the driver container image
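For comparison, a minimal sketch of the working --conf variant of the same command array (the specific conf keys and placeholder values here are assumptions, not from the original manifest):

```json
"command": [
  "/opt/spark/bin/spark-submit",
  "--name", "SparkPi",
  "--deploy-mode", "cluster",
  "--master", "k8s://<path to k8s cluster>",
  "--conf", "spark.kubernetes.container.image=<image>",
  "--conf", "spark.kubernetes.authenticate.driver.serviceAccountName=spark",
  "--class", "org.apache.spark.examples.SparkPi",
  "/opt/spark/examples/jars/spark-examples_2.11-2.4.0.jar"
]
```

Note that in this variant every option appears before the application jar, so spark-submit parses it as a launcher option rather than an application argument.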
The properties file does contain spark.kubernetes.container.image, and it is accessible from the container (verified with the cat command).
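The file itself was not shown in the post; a hypothetical sketch of the kind of contents described (only the first key is confirmed by the post, the rest are illustrative assumptions):

```properties
# mounted at /etc/prop/properties.conf via the "prop" configmap
spark.kubernetes.container.image=<image>
# illustrative additional settings, not confirmed by the post
spark.kubernetes.namespace=default
spark.kubernetes.authenticate.driver.serviceAccountName=spark
```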