I am trying to write files to IBM Cloud Object Storage with Spark, but every call to the saveAsTextFile method fails with:

```
Exception in thread "main" java.io.IOException: No FileSystem for scheme: s3d
```
My code is as follows (for testing purposes only):
```scala
val sparkConf = new SparkConf().setAppName("Test").setMaster("local")
val sc = new SparkContext(sparkConf)
val sqlContext = new org.apache.spark.sql.SQLContext(sc)

sc.hadoopConfiguration.set("fs.s3d.service.endpoint", endPoint)
sc.hadoopConfiguration.set("fs.s3d.service.access.key", accessKey)
sc.hadoopConfiguration.set("fs.s3d.service.secret.key", secretKey)

val warehouseLocation = "file:${system:user.dir}/spark-warehouse"
val spark = SparkSession
  .builder()
  .appName("Test")
  .config("spark.sql.warehouse.dir", warehouseLocation)
  .getOrCreate()

val file = sc.textFile("src/main/resources/test.csv").map(line => line.split(","))
file.saveAsTextFile("s3d://rollup.service/result")
```
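For context on what I have found so far: "No FileSystem for scheme" seems to mean Hadoop has no FileSystem implementation registered for the `s3d` scheme, so setting the credential properties alone may not be enough. If the `s3d` scheme is supposed to come from the IBM Stocator connector (this is my assumption; the exact implementation class name depends on the Stocator version), I believe the registration would look roughly like this:

```scala
// Sketch only: assumes the Stocator connector jar is on the classpath.
// The class name below is taken from the Stocator project and may differ
// in your version -- please verify it against the jar you depend on.
sc.hadoopConfiguration.set("fs.s3d.impl",
  "com.ibm.stocator.fs.ObjectStoreFileSystem")
```

I am not sure whether this is the right property, or whether I am missing a dependency entirely.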
Can anyone help me? Thanks!