I have VirtualBox running with Docker and Minikube.
opt/spark/bin/spark-submit --master k8s://https://192.168.99.101:8443 --name cfe2 --deploy-mode cluster --class com.yyy.Application --conf spark.executor.instances=1 --conf spark.kubernetes.container.image=docker.io/anantpukale/spark_app:1.3 local://CashFlow-spark2.3.0-shaded.jar
start time: N/A
container images: N/A
phase: Pending
status: []
2018-04-11 09:57:52 INFO LoggingPodStatusWatcherImpl:54 - State changed, new state:
	 pod name: cfe2-c4f95aaeaefb3564b8106ad86e245457-driver
	 namespace: default
	 labels: spark-app-selector -> spark-dab914d1d34b4ecd9b747708f667ec2b, spark-role -> driver
	 pod uid: cc3b39e1-3d6e-11e8-ab1d-080027fcb315
	 creation time: 2018-04-11T09:57:51Z
	 service account name: default
	 volumes: default-token-v48xb
	 node name: minikube
	 start time: 2018-04-11T09:57:51Z
	 container images: docker.io/anantpukale/spark_app:1.3
	 phase: Pending
	 status: [ContainerStatus(containerID=null, image=docker.io/anantpukale/spark_app:1.3, imageID=, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState(running=null, terminated=null, waiting=ContainerStateWaiting(message=null, reason=ContainerCreating, additionalProperties={}), additionalProperties={}), additionalProperties={})]
2018-04-11 09:57:52 INFO Client:54 - Waiting for application cfe2 to finish...
2018-04-11 09:57:52 INFO LoggingPodStatusWatcherImpl:54 - State changed, new state:
	 pod name: cfe2-c4f95aaeaefb3564b8106ad86e245457-driver
	 namespace: default
	 labels: spark-app-selector -> spark-dab914d1d34b4ecd9b747708f667ec2b, spark-role -> driver
	 pod uid: cc3b39e1-3d6e-11e8-ab1d-080027fcb315
	 creation time: 2018-04-11T09:57:51Z
	 service account name: default
	 volumes: default-token-v48xb
	 node name: minikube
	 start time: 2018-04-11T09:57:51Z
	 container images: anantpukale/spark_app:1.3
	 phase: Failed
	 status: [ContainerStatus(containerID=docker://40eae507eb9b615d3dd44349e936471157428259f583ec6a8ba3bd99d80b013e, image=anantpukale/spark_app:1.3, imageID=docker-pullable://anantpukale/spark_app@sha256:f61b3ef65c727a3ebd8a28362837c0bc90649778b668f78b6a33b7c0ce715227, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState(running=null, terminated=ContainerStateTerminated(containerID=docker://40eae507eb9b615d3dd44349e936471157428259f583ec6a8ba3bd99d80b013e, exitCode=127, finishedAt=Time(time=2018-04-11T09:57:52Z, additionalProperties={}), message=invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"exec: \\\"driver\\\": executable file not found in $PATH\"\n", reason=ContainerCannotRun, signal=null, startedAt=Time(time=2018-04-11T09:57:52Z, additionalProperties={}), additionalProperties={}), waiting=null, additionalProperties={}), additionalProperties={})]
2018-04-11 09:57:52 INFO LoggingPodStatusWatcherImpl:54 - Container final statuses:
	 Container name: spark-kubernetes-driver
	 Container image: anantpukale/spark_app:1.3
	 Container state: Terminated
	 Exit code: 127
2018-04-11 09:57:52 INFO Client:54 - Application cfe2 finished.
2018-04-11 09:57:52 INFO ShutdownHookManager:54 - Shutdown hook called
2018-04-11 09:57:52 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-d5813d6e-a4af-4bf6-b1fc-dc43c75cd643
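The key line in the log is exec: "driver": executable file not found in $PATH, with exit code 127. As a quick sanity check, independent of Spark or Kubernetes, a POSIX shell reports exactly this exit code whenever the command it is asked to run cannot be found on $PATH (the command name below is made up purely for illustration):

```shell
# Try to run a command that does not exist on $PATH, mirroring
# what the container runtime tried to do with "driver".
sh -c 'no_such_driver_cmd' 2>/dev/null
echo "exit code: $?"   # prints: exit code: 127 (command not found)
```

So exit code 127 here means the container runtime could not find an executable named "driver" anywhere on the image's PATH, not that the driver started and then crashed.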
The error trace suggests that something in Docker was triggered by the command "docker".
Answer 0 (score: 1)
I ran into this problem. It is related to the Docker image's ENTRYPOINT. In Spark 2.3.0, when using Kubernetes, there is now an example Dockerfile that uses a specific script as the ENTRYPOINT, found under kubernetes/dockerfiles/. If the Docker image does not use that specific script as its ENTRYPOINT, the container will not start properly. See the Spark Kubernetes Docker documentation.
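For reference, a minimal sketch of such an image, modeled on the kubernetes/dockerfiles/spark/Dockerfile template shipped with the Spark 2.3.0 distribution (the base image and copy paths below are assumptions taken from that template, not from the asker's image):

```dockerfile
# Sketch based on the Spark 2.3.0 kubernetes/dockerfiles template.
FROM openjdk:8-alpine

# Copy the Spark distribution and the Kubernetes entrypoint script into the image.
COPY jars /opt/spark/jars
COPY bin /opt/spark/bin
COPY sbin /opt/spark/sbin
COPY kubernetes/dockerfiles/spark/entrypoint.sh /opt/

ENV SPARK_HOME /opt/spark
WORKDIR /opt/spark/work-dir

# The crucial part: Spark on Kubernetes passes arguments like "driver" or
# "executor" to the container, and entrypoint.sh dispatches on them. If this
# ENTRYPOINT is missing, the runtime tries to exec "driver" directly and fails
# with 'executable file not found in $PATH'.
ENTRYPOINT [ "/opt/entrypoint.sh" ]
```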
Answer 1 (score: 0)
ENV PATH="/opt/spark/bin:${PATH}"
instead of your line.
Answer 2 (score: 0)
Could you log in to the container with
#> docker run -it --rm docker.io/anantpukale/spark_app:1.3 sh
and try to run the main program or command that you are trying to submit? Based on that output we can investigate further.
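While inside the container (or from the host), it may also help to check what the image declares as its entrypoint and default command; if the entrypoint is empty, Kubernetes ends up exec'ing the bare argument "driver". A sketch of the checks, assuming a local Docker daemon with the image pulled:

```dockerfile
# Show the configured ENTRYPOINT and CMD of the image.
# docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' \
#     docker.io/anantpukale/spark_app:1.3

# Inside the container, confirm spark-submit and the expected
# entrypoint script actually exist and are executable:
# ls -l /opt/entrypoint.sh /opt/spark/bin/spark-submit
# echo $PATH
```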
Answer 3 (score: 0)
Following @hichamx's suggestion together with the code below helped me get past the exec: "driver" issue. spark-submit --master k8s://http://127.0.0.1:8001 --name cfe2 --deploy-mode cluster --class com.oracle.Test --conf spark.executor.instances=2 --conf spark.kubernetes.container.image=docker/anantpukale/spark_app:1.1 --conf spark.kubernetes.driver.container.image=docker.io/kubespark/spark-driver:v2.2.0-kubernetes-0.5.0 --conf spark.kubernetes.executor.container.image=docker.io/kubespark/spark-executor:v2.2.0-kubernetes-0.5.0 local://spark-0.0.1-SNAPSHOT.jar
However, this still ended with the error Exit Code: 127 and spark-kubernetes-driver terminated.