Cannot access an AWS S3 bucket from Google Dataproc using Spark 2.4.4

Asked: 2020-01-21 06:21:26

Tags: scala apache-spark amazon-s3 sbt

I am using the following code to access some files stored on S3:

val spark = SparkSession.builder()
      .enableHiveSupport()
      .config("fs.s3n.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")
      .config("fs.s3n.awsAccessKeyId", <Access_key>)
      .config("fs.s3n.awsSecretAccessKey", <SecretAccessKey>)
      .getOrCreate()


val df = spark.read.orc(<s3 bucket name>)
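(For context on the setup above: on recent Hadoop builds the s3n connector, which depends on jets3t, is deprecated in favor of s3a, which is bundled in hadoop-aws and does not need jets3t at all. A minimal sketch of the equivalent session configuration using s3a — assuming hadoop-aws and a matching aws-java-sdk are on the classpath; the bucket path and environment variable names are placeholders, not part of the original question:)

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: assumes the s3a connector (hadoop-aws) is available on the cluster.
val spark = SparkSession.builder()
  .enableHiveSupport()
  .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
  .config("spark.hadoop.fs.s3a.access.key", sys.env("AWS_ACCESS_KEY_ID"))
  .config("spark.hadoop.fs.s3a.secret.key", sys.env("AWS_SECRET_ACCESS_KEY"))
  .getOrCreate()

// Note the s3a:// scheme instead of s3n://
val df = spark.read.orc("s3a://<bucket>/<path>")
```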

I get the following error on the line val df = spark.read.orc():

Caused by: java.lang.ClassNotFoundException: org.jets3t.service.S3ServiceException
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)

I am submitting the Spark jar with the following command:

gcloud dataproc jobs submit spark \
    --project <project_name> \
    --region <region> \
    --cluster <cluster name> \
    --class <main class> \
    --properties spark.jars.packages='net.java.dev.jets3t:jets3t:0.9.4' \
    --jars gs://<bucket_name>/jars/sample_s3-assembly-0.1.jar,gs://<bucket_name>/jars/jets3t-0.9.4.jar

My sbt build file looks like this:

name := "sample_s3_ht"

version := "0.1"

scalaVersion := "2.11.12"


resolvers += Opts.resolver.sonatypeReleases
libraryDependencies += "org.apache.hadoop" % "hadoop-aws" % "2.7.1"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.4.0"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.4.0"
libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.4.0"
libraryDependencies += "com.google.cloud" % "google-cloud-bigquery" % "1.80.0"
libraryDependencies += "com.google.cloud" % "google-cloud-storage" % "1.98.0"
libraryDependencies += "com.google.cloud.spark" %% "spark-bigquery" % "0.7.0-beta"
libraryDependencies += "com.google.cloud.bigdataoss" % "gcs-connector" % "1.6.1-hadoop2"
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.6.0"
libraryDependencies += "net.java.dev.jets3t" % "jets3t" % "0.9.4"


assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}

I have also added the jets3t, hadoop-aws, and hadoop-client entries to sbt, as suggested in another thread, but I still get the error above.

0 answers:

There are no answers.