Spark version: 2.3.3
Kubernetes version: v1.15.3
I'm getting the exception below while running Spark code on Kubernetes.
Even though I assigned the Role and RoleBinding and retried, it still throws the same exception. Please suggest a solution if anyone has run into this kind of exception.
2019-09-11 10:35:54 WARN KubernetesClusterManager:66 - The executor's init-container config map is not specified. Executors will therefore not attempt to fetch remote or submitted dependencies.
2019-09-11 10:35:54 WARN KubernetesClusterManager:66 - The executor's init-container config map key is not specified. Executors will therefore not attempt to fetch remote or submitted dependencies.
2019-09-11 10:35:57 WARN WatchConnectionManager:185 - Exec Failure: HTTP 403, Status: 403 -
java.net.ProtocolException: Expected HTTP 101 response but was '403 Forbidden'
at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:216)
at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:183)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:141)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-09-11 10:35:57 ERROR SparkContext:91 - Error initializing SparkContext.
io.fabric8.kubernetes.client.KubernetesClientException:
at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2.onFailure(WatchConnectionManager.java:188)
at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:543)
at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:185)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:141)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-09-11 10:35:57 INFO AbstractConnector:318 - Stopped Spark@7c351808{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2019-09-11 10:35:57 INFO SparkUI:54 - Stopped Spark web UI at http://spark-pi-8ee39f55094a39cc9f6d34d8739549d2-driver-svc.default.svc:4040
2019-09-11 10:35:57 INFO KubernetesClusterSchedulerBackend:54 - Shutting down all executors
2019-09-11 10:35:57 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint:54 - Asking each executor to shut down
2019-09-11 10:35:57 INFO KubernetesClusterSchedulerBackend:54 - Closing kubernetes client
2019-09-11 10:35:57 INFO MapOutputTrackerMasterEndpoint:54 - MapOutputTrackerMasterEndpoint stopped!
2019-09-11 10:35:57 INFO MemoryStore:54 - MemoryStore cleared
2019-09-11 10:35:57 INFO BlockManager:54 - BlockManager stopped
2019-09-11 10:35:57 INFO BlockManagerMaster:54 - BlockManagerMaster stopped
2019-09-11 10:35:57 WARN MetricsSystem:66 - Stopping a MetricsSystem that is not running
2019-09-11 10:35:57 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:54 - OutputCommitCoordinator stopped!
2019-09-11 10:35:57 INFO SparkContext:54 - Successfully stopped SparkContext
Exception in thread "main" io.fabric8.kubernetes.client.KubernetesClientException:
at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2.onFailure(WatchConnectionManager.java:188)
at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:543)
at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:185)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:141)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-09-11 10:35:57 INFO ShutdownHookManager:54 - Shutdown hook called
I created a Role and RoleBinding and tried again, but it didn't help.
I even reset Kubernetes and retried after the reset, but I'm still facing the same issue.
I couldn't find a solution for this on Google.
Below is the spark-submit command I'm using:
nohup bin/spark-submit --master k8s://https://192.168.154.58:6443 --deploy-mode cluster --name spark-pi --class org.apache.spark.examples.JavaSparkPi --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark --conf spark.executor.instances=1 --conf spark.kubernetes.container.image=innoeye123/spark:latest local:///opt/spark/examples/jars/spark-examples_2.11-2.3.3.jar > tool.log &
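For context, the service account and binding were created along the lines of the Spark-on-Kubernetes RBAC docs. A minimal sketch, assuming the spark service account lives in the default namespace (matching the spark-submit flags above):

# Service account referenced by spark.kubernetes.authenticate.driver.serviceAccountName
kubectl create serviceaccount spark --namespace=default
# Grant it edit rights so the driver can create and watch executor pods
kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=default:spark --namespace=default
# Sanity check: the 403 above occurs on a pod watch, so this should print "yes"
kubectl auth can-i watch pods --as=system:serviceaccount:default:spark --namespace=default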
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.spark.examples;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SparkSession;
import java.util.ArrayList;
import java.util.List;
/**
* Computes an approximation to pi
* Usage: JavaSparkPi [partitions]
*/
public final class JavaSparkPi {

  public static void main(String[] args) throws Exception {
    SparkSession spark = SparkSession
      .builder()
      .appName("JavaSparkPi")
      .getOrCreate();

    JavaSparkContext jsc = new JavaSparkContext(spark.sparkContext());

    int slices = (args.length == 1) ? Integer.parseInt(args[0]) : 2;
    int n = 100000 * slices;
    List<Integer> l = new ArrayList<>(n);
    for (int i = 0; i < n; i++) {
      l.add(i);
    }

    JavaRDD<Integer> dataSet = jsc.parallelize(l, slices);

    // Throw n random darts at the unit square; the fraction that lands
    // inside the unit circle approximates pi/4.
    int count = dataSet.map(integer -> {
      double x = Math.random() * 2 - 1;
      double y = Math.random() * 2 - 1;
      return (x * x + y * y <= 1) ? 1 : 0;
    }).reduce((integer, integer2) -> integer + integer2);

    System.out.println("Pi is roughly " + 4.0 * count / n);

    spark.stop();
  }
}
Expected result: the spark-submit command should run smoothly and terminate successfully, leaving a completed driver pod.
Answer 0 (score: 0)
This looks like the reported issue SPARK-28921, which lists the affected Spark versions.
Check whether you are running one of those.
The fix went into later releases, so you may need to upgrade.
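If upgrading Spark itself is not an option right away, one workaround discussed under SPARK-28921 is to swap the fabric8 kubernetes-client jar bundled with Spark for a newer release. A sketch under that assumption (the 4.4.2 version below is illustrative, not a confirmed minimum):

cd $SPARK_HOME/jars
# Remove the old fabric8 client shipped with Spark 2.3.3
rm kubernetes-client-*.jar
# Fetch a newer client from Maven Central that copes with the changed watch handshake
wget https://repo1.maven.org/maven2/io/fabric8/kubernetes-client/4.4.2/kubernetes-client-4.4.2.jar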
Answer 1 (score: 0)
Spark 2.4.4, and Spark 3.0.0 with k8s 1.18, gave me the same error.
You can try using the HTTP API instead of HTTPS.
Go to your Kubernetes master and create an HTTP access point:
kubectl proxy --address=ip_your_master_k8s --port=port_what_you_want --accept-hosts='^*' --accept-paths='^.*' --disable-filter=true
Then:
nohup bin/spark-submit --master k8s://http://192.168.154.58:port_what_you_want --deploy-mode cluster --name spark-pi --class org.apache.spark.examples.JavaSparkPi --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark --conf spark.executor.instances=1 --conf spark.kubernetes.container.image=innoeye123/spark:latest local:///opt/spark/examples/jars/spark-examples_2.11-2.3.3.jar > tool.log &
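To sanity-check the proxy before submitting, you can query the API server over plain HTTP first (the address and port are the same placeholders passed to kubectl proxy above):

curl http://ip_your_master_k8s:port_what_you_want/api/v1/namespaces/default/pods

If that returns a pod list instead of a 403, the spark-submit above should get past the watch handshake.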