Error reading from S3 using Spark / Hadoop

Asked: 2013-08-31 02:30:16

Tags: hadoop amazon-s3 apache-spark jets3t

I'm trying to read data from Amazon S3 using Spark, but I'm getting

java.lang.NoClassDefFoundError: org/jets3t/service/S3ServiceException

from inside the Hadoop calls. I tried downloading jets3t and adding all of its bundled jars to my classpath, but that didn't help. Here is a full transcript of what happens:

scala> val zz = sc.textFile("s3n:/<bucket>/<path>")
13/08/30 19:50:21 INFO storage.MemoryStore: ensureFreeSpace(45979) called with curMem=46019, maxMem=8579469803
13/08/30 19:50:21 INFO storage.MemoryStore: Block broadcast_1 stored as values to memory (estimated size 44.9 KB, free 8.0 GB)
zz: spark.RDD[String] = MappedRDD[3] at textFile at <console>:12

scala> zz.first
13/08/30 19:50:38 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/08/30 19:50:38 WARN snappy.LoadSnappy: Snappy native library not loaded
java.lang.NoClassDefFoundError: org/jets3t/service/S3ServiceException
at org.apache.hadoop.fs.s3native.NativeS3FileSystem.createDefaultStore(NativeS3FileSystem.java:224)
at org.apache.hadoop.fs.s3native.NativeS3FileSystem.initialize(NativeS3FileSystem.java:214)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:176)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
at spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:76)
at spark.RDD.partitions(RDD.scala:214)
at spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:26)
at spark.RDD.partitions(RDD.scala:214)
at spark.RDD.take(RDD.scala:764)
at spark.RDD.first(RDD.scala:778)

1 Answer:

Answer 0 (score: 1)

When you run a Hadoop job, the Hadoop classpath environment variable must be set; normally this is done in the Hadoop startup scripts.

export HADOOP_CLASSPATH=/path/to/yourlib:/path/to/your/other/lib

Replace : with ; if you are on Windows.
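For instance, to put the jets3t jar itself on that classpath (the path and jar version below are placeholders; substitute wherever your jets3t download actually lives):

export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/path/to/jets3t/jars/jets3t-0.9.0.jar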
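Since the error here comes from the Spark shell rather than a standalone Hadoop job, the jar also needs to reach Spark's own classpath. In Spark releases of that era this could be done with the SPARK_CLASSPATH environment variable before launching the shell; again, the path is illustrative:

export SPARK_CLASSPATH=/path/to/jets3t/jars/jets3t-0.9.0.jar
./spark-shell

Once inside the shell, a quick way to confirm the jar is actually visible is to load the class directly; if this returns without throwing, textFile should get past the NoClassDefFoundError:

scala> Class.forName("org.jets3t.service.S3ServiceException")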