No FileSystem for scheme: s3d

Asked: 2018-07-17 20:58:28

Tags: scala apache-spark hadoop

I am trying to write files to IBM Cloud Object Storage from Spark, but I always get the following error when I call the saveAsTextFile method:

Exception in thread "main" java.io.IOException: No FileSystem for scheme: s3d
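
As far as I understand, Hadoop resolves the scheme of a URI to a concrete FileSystem class through the fs.<scheme>.impl configuration property (or a ServiceLoader entry on the classpath), and this exception is raised when nothing is registered for the scheme. Here is a minimal sketch of that lookup, reproducing the failure in isolation (this is my reading of the mechanism, so treat it as an assumption):

import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem

// With no fs.s3d.impl property set and no jar registering the s3d
// scheme on the classpath, this call throws
// "java.io.IOException: No FileSystem for scheme: s3d"
val conf = new Configuration()
val fs = FileSystem.get(new URI("s3d://rollup.service/"), conf)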

My code looks like this (for testing purposes only):

import java.io.File
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SparkSession

val sparkConf = new SparkConf().setAppName("Test").setMaster("local")
val sc = new SparkContext(sparkConf)

// IBM COS credentials for the s3d scheme, using the Stocator-style property names
sc.hadoopConfiguration.set("fs.s3d.service.endpoint", endPoint)
sc.hadoopConfiguration.set("fs.s3d.service.access.key", accessKey)
sc.hadoopConfiguration.set("fs.s3d.service.secret.key", secretKey)

// "file:${system:user.dir}" is not expanded inside a plain Scala string,
// so build the warehouse path explicitly
val warehouseLocation = new File("spark-warehouse").getAbsolutePath

// getOrCreate() reuses the SparkContext created above
val spark = SparkSession
  .builder()
  .appName("Test")
  .config("spark.sql.warehouse.dir", warehouseLocation)
  .getOrCreate()

val file = sc.textFile("src/main/resources/test.csv").map(line => line.split(","))
file.saveAsTextFile("s3d://rollup.service/result")
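
From what I have read, I suspect the missing piece is that nothing maps the s3d scheme to Stocator's FileSystem implementation. Something like the following might be needed (the class name and dependency coordinates come from the Stocator project docs and are an assumption on my part, not something I have verified):

// Assumed fix: register Stocator's FileSystem class for the s3d scheme;
// the class name is taken from the Stocator README
sc.hadoopConfiguration.set("fs.s3d.impl", "com.ibm.stocator.fs.ObjectStoreFileSystem")

// build.sbt (version is a placeholder; check Maven Central for a current one):
// libraryDependencies += "com.ibm.stocator" % "stocator" % "1.0.24"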

Can anyone help me with this? Thanks!

0 Answers