SparkContext: Error initializing SparkContext on the MapR Sandbox

Asked: 2018-12-19 08:27:05

Tags: java scala apache-spark mapr

I am trying to run this sample project, which uses MapR.
When I execute the ml.Flight class in the sandbox and it reaches the line below

val spark: SparkSession = SparkSession.builder().appName("churn").getOrCreate()

I get this error:

[user01@maprdemo ~]$ spark-submit --class ml.Flight --master local[2] spark-ml-flightdelay-1.0.jar
Warning: Unable to determine $DRILL_HOME
18/12/19 05:39:09 WARN Utils: Your hostname, maprdemo.local resolves to a loopback address: 127.0.0.1; using 10.0.3.1 instead (on interface enp0s3)
18/12/19 05:39:09 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
18/12/19 05:39:20 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/12/19 05:39:28 ERROR SparkContext: Error initializing SparkContext.
java.io.IOException: Could not create FileClient
    at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:656)
    at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:709)
    at com.mapr.fs.MapRFileSystem.getMapRFileStatus(MapRFileSystem.java:1419)
    at com.mapr.fs.MapRFileSystem.getFileStatus(MapRFileSystem.java:1093)
    at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:100)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:522)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2493)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:933)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:924)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:924)
    at ml.Flight$.main(Flight.scala:37)
    at ml.Flight.main(Flight.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:899)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.IOException: Could not create FileClient
    at com.mapr.fs.MapRClientImpl.<init>(MapRClientImpl.java:137)
    at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:650)
    ... 22 more

I am new to Scala/Spark, so any help is welcome. Thanks in advance.

1 Answer:

Answer 0 (score: 0)

I think you are running, or have exported on your PATH, a different spark-submit (for example one from a Python installation) instead of the MapR-provided one.

Try invoking the MapR installation explicitly, for example:

/opt/mapr/spark/spark-2.3.1/bin/spark-submit
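A minimal sketch of how to check this, assuming the MapR Spark install lives at the path shown above (the jar name and class are taken from the question; your Spark version directory may differ):

```shell
# See which spark-submit is first on the PATH; if it is not under
# /opt/mapr/..., the MapR native client libraries may not be picked up,
# which can surface as "Could not create FileClient".
which spark-submit

# Invoke the MapR-bundled spark-submit explicitly with the same arguments
# used in the question:
/opt/mapr/spark/spark-2.3.1/bin/spark-submit \
  --class ml.Flight \
  --master local[2] \
  spark-ml-flightdelay-1.0.jar
```

This is only a command-line fragment for illustration; it needs a running MapR sandbox to actually execute.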