Spark 2.0 with spark.read.text: "Expected scheme-specific part at index 3: s3:" error

Date: 2017-03-28 16:44:36

Tags: apache-spark amazon-s3 apache-spark-2.0

I'm running into an odd problem loading a text file with SparkSession. My current Spark configuration looks like this:

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

val sparkConf = new SparkConf().setAppName("name-here")
// Register the Hadoop Writable classes used when reading text input with Kryo.
sparkConf.registerKryoClasses(Array(Class.forName("org.apache.hadoop.io.LongWritable"), Class.forName("org.apache.hadoop.io.Text")))
sparkConf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
val spark = SparkSession.builder()
    .config(sparkConf)
    .getOrCreate()
// S3A settings: filesystem implementation, server-side encryption,
// and the v2 file output committer algorithm.
spark.sparkContext.hadoopConfiguration.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
spark.sparkContext.hadoopConfiguration.set("fs.s3a.enableServerSideEncryption", "true")
spark.sparkContext.hadoopConfiguration.set("mapreduce.fileoutputcommitter.algorithm.version", "2")

Loading an s3a file through an RDD works fine. However, if I try something like this:

    // Goal: collect the distinct input file names under the bucket.
    val blah = SparkConfig.spark.read.text("s3a://bucket-name/*/*.txt")
        .select(input_file_name, col("value"))
        .drop("value")
        .distinct()
    val x = blah.collect()
    println(blah.head().get(0))
    println(x.size)

I get an exception: java.net.URISyntaxException: Expected scheme-specific part at index 3: s3:

Do I need to add some extra s3a configuration for the SQLContext or SparkSession? I haven't found any question or online resource that mentions this. The strange part is that the job appears to run for ten minutes and then fails with this exception. Again, loading an RDD from the same bucket works without any problem.
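
To be concrete, the RDD-style read that does succeed looks roughly like this (same bucket, path shown as a placeholder):

    // Plain RDD read of the same s3a glob: this completes without error.
    val lines = spark.sparkContext.textFile("s3a://bucket-name/*/*.txt")
    println(lines.count())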

Another strange thing is that it complains about s3 rather than s3a. I've triple-checked my prefix, and it always says s3a.

Edit: I checked both s3a and s3; both throw the same exception.

17/04/06 21:29:14 ERROR ApplicationMaster: User class threw exception: 
java.lang.IllegalArgumentException: java.net.URISyntaxException: 
Expected scheme-specific part at index 3: s3:
java.lang.IllegalArgumentException: java.net.URISyntaxException: 
Expected scheme-specific part at index 3: s3:
at org.apache.hadoop.fs.Path.initialize(Path.java:205)
at org.apache.hadoop.fs.Path.<init>(Path.java:171)
at org.apache.hadoop.fs.Path.<init>(Path.java:93)
at org.apache.hadoop.fs.Globber.glob(Globber.java:240)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1732)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1713)
at org.apache.spark.deploy.SparkHadoopUtil.globPath(SparkHadoopUtil.scala:237)
at org.apache.spark.deploy.SparkHadoopUtil.globPathIfNecessary(SparkHadoopUtil.scala:243)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$14.apply(DataSource.scala:374)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$14.apply(DataSource.scala:370)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.immutable.List.flatMap(List.scala:344)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:370)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
at org.apache.spark.sql.DataFrameReader.text(DataFrameReader.scala:506)
at org.apache.spark.sql.DataFrameReader.text(DataFrameReader.scala:486)
at com.omitted.omitted.jobs.Omitted$.doThings(Omitted.scala:18)
at com.omitted.omitted.jobs.Omitted$.main(Omitted.scala:93)
at com.omitted.omitted.jobs.Omitted.main(Omitted.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:637)
Caused by: java.net.URISyntaxException: Expected scheme-specific part 
at index 3: s3:
at java.net.URI$Parser.fail(URI.java:2848)
at java.net.URI$Parser.failExpecting(URI.java:2854)
at java.net.URI$Parser.parse(URI.java:3057)
at java.net.URI.<init>(URI.java:746)
at org.apache.hadoop.fs.Path.initialize(Path.java:202)
... 26 more
17/04/06 21:29:14 INFO ApplicationMaster: Final app status: FAILED, 
exitCode: 15, (reason: User class threw exception: 
java.lang.IllegalArgumentException: java.net.URISyntaxException: 
Expected scheme-specific part at index 3: s3:)

1 Answer:

Answer 0 (score: 0)

This should work.

  • Get the right JARs on your classpath: Spark built against Hadoop 2.7, the matching hadoop-aws JAR, aws-java-sdk-1.7.4.jar (exactly this version) and joda-time-2.9.3.jar (or a later one); see the sketch after this list.
  • You don't need to set the fs.s3a.impl value; that binding is already in the Hadoop defaults. If you do find yourself setting it, that's a sign of a problem.
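
A minimal build.sbt sketch of that dependency line-up (the Spark and Hadoop version numbers here are illustrative and must match your actual build; only aws-java-sdk 1.7.4 and joda-time 2.9.3 are called out above):

// build.sbt sketch -- hadoop-aws must match the Hadoop version Spark was built with,
// and the AWS SDK must stay at exactly 1.7.4 for Hadoop 2.7.x.
libraryDependencies ++= Seq(
  "org.apache.spark"  %% "spark-sql"    % "2.0.2" % "provided",
  "org.apache.hadoop" %  "hadoop-aws"   % "2.7.3",
  "com.amazonaws"     %  "aws-java-sdk" % "1.7.4",
  "joda-time"         %  "joda-time"    % "2.9.3"
)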

What's the full stack trace?