I am trying to run simple Spark code on a Kubernetes cluster using the Spark 2.3 native Kubernetes deployment feature.

I have a Kubernetes cluster up and running. At this point, the Spark code does not read or write data; it creates an RDD from a list and prints the result, just to validate the ability to run Spark on Kubernetes. The Spark app jar is also copied into the Kubernetes container image.

Below is the command I ran:
bin/spark-submit --master k8s://https://k8-master --deploy-mode cluster --name sparkapp --class com.sparrkonk8.rdd.MockWordCount --conf spark.executor.instances=5 --conf spark.kubernetes.container.image=myapp/sparkapp:1.0.0 local:///SparkApp.jar
2018-03-06 10:31:28 INFO LoggingPodStatusWatcherImpl:54 - State changed, new state:
pod name: sparkapp-6e475a6ae18d3b7a89ca2b5f6ae7aae4-driver
namespace: default
labels: spark-app-selector -> spark-9649dd66e9a946d989e2136d342ef249, spark-role -> driver
pod uid: 6d3e98cf-2153-11e8-85af-1204f474c8d2
creation time: 2018-03-06T15:31:23Z
service account name: default
volumes: default-token-vwxvr
node name: 192-168-1-1.myapp.engg.com
start time: 2018-03-06T15:31:23Z
container images: dockerhub.com/myapp/sparkapp:1.0.0
phase: Failed
status: [ContainerStatus(containerID=docker://3617a400e4604600d5fcc69df396facafbb2d9cd485a63bc324c1406e72f0d35, image=dockerhub.com/myapp/sparkapp:1.0.0, imageID=docker-pullable://dockerhub.com/sparkapp@sha256:f051d86384422dff3e8c8a97db823de8e62af3ea88678da4beea3f58cdb924e5, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState(running=null, terminated=ContainerStateTerminated(containerID=docker://3617a400e4604600d5fcc69df396facafbb2d9cd485a63bc324c1406e72f0d35, exitCode=1, finishedAt=Time(time=2018-03-06T15:31:24Z, additionalProperties={}), message=null, reason=Error, signal=null, startedAt=Time(time=2018-03-06T15:31:24Z, additionalProperties={}), additionalProperties={}), waiting=null, additionalProperties={}), additionalProperties={})]
2018-03-06 10:31:28 INFO LoggingPodStatusWatcherImpl:54 - Container final statuses:
container name: spark-kubernetes-driver
container image: myapp/sparkapp:1.0.0
container state: Terminated
exit code: 1
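For reference, an exit code of 1 with no further detail in the submission log usually means the driver JVM failed at startup, and the driver pod's own log is the first place to look. A hedged sketch, assuming `kubectl` access to the cluster and the driver pod name shown in the status output above:

```shell
# Fetch the driver container's stdout/stderr; the pod name below is
# taken from the status output above -- substitute your actual driver pod.
kubectl logs sparkapp-6e475a6ae18d3b7a89ca2b5f6ae7aae4-driver

# If the log is empty or the container never started, the pod's events
# (image pull errors, command failures, etc.) may explain why:
kubectl describe pod sparkapp-6e475a6ae18d3b7a89ca2b5f6ae7aae4-driver
```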
Answer 0 (score: 0)
Below is the Spark configuration that the driver pod was submitted with. I pulled this from the K8s UI. @TobiSH, let me know if this helps resolve my issue.
SPARK_DRIVER_MEMORY: 1g
SPARK_DRIVER_CLASS: com.sparrkonk8.rdd.MockWordCount
SPARK_DRIVER_ARGS:
SPARK_DRIVER_BIND_ADDRESS:
SPARK_MOUNTED_CLASSPATH: /SparkApp.jar:/SparkApp.jar
SPARK_JAVA_OPT_0: -Dspark.kubernetes.executor.podNamePrefix=sparkapp-028d46fa109e309b8dfe1a4eceb46b61
SPARK_JAVA_OPT_1: -Dspark.app.name=sparkapp
SPARK_JAVA_OPT_2: -Dspark.kubernetes.driver.pod.name=sparkapp-028d46fa109e309b8dfe1a4eceb46b61-driver
SPARK_JAVA_OPT_3: -Dspark.executor.instances=5
SPARK_JAVA_OPT_4: -Dspark.submit.deployMode=cluster
SPARK_JAVA_OPT_5: -Dspark.driver.blockManager.port=7079
SPARK_JAVA_OPT_6: -Dspark.kubernetes.container.image=docker.com/myapp/sparkapp:1.0.0
SPARK_JAVA_OPT_7: -Dspark.app.id=spark-5e3beb5109174f40a84635b786789c30
SPARK_JAVA_OPT_8: -Dspark.master=k8s://https://k8-master
SPARK_JAVA_OPT_9: -Dspark.driver.host=sparkapp-028d46fa109e309b8dfe1a4eceb46b61-driver-svc.default.svc
SPARK_JAVA_OPT_10: -Dspark.jars=/opt/spark/work-dir/SparkApp.jar,/opt/spark/work-dir/SparkApp.jar
SPARK_JAVA_OPT_11: -Dspark.driver.port=7078
Answer 1 (score: 0)
Since there are no logs, the container is crashing immediately on creation. I would suggest first running the same command with a local master configuration, to confirm that everything on the Spark side works, and then trying again with Kubernetes as the master.
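A minimal sketch of that suggestion, reusing the class name and jar from the question (the local jar path is an assumption; adjust it to wherever your build places SparkApp.jar). Note that cluster deploy mode is not supported with a local master, so client mode is used here:

```shell
# Run the same application against a local master to rule out problems
# in the Spark code itself before involving Kubernetes.
bin/spark-submit \
  --master "local[*]" \
  --deploy-mode client \
  --name sparkapp \
  --class com.sparrkonk8.rdd.MockWordCount \
  ./SparkApp.jar
```

If this succeeds but the Kubernetes submission still fails with exit code 1, the problem is more likely in the container image or the k8s configuration (image name, jar path inside the image, service account permissions) than in the application code.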