I'm trying to read data from AWS S3 into a Dataset/RDD in Java, but I get Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/StreamCapabilities.
I'm running the Spark code in Java from IntelliJ, so I also added the Hadoop dependencies to pom.xml.
Below are my code and pom.xml file.
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkJava {
    public static void main(String[] args) {
        SparkSession spark = SparkSession
                .builder()
                .master("local")
                .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
                .config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "2")
                .config("fs.s3n.awsAccessKeyId", AWS_KEY)             // AWS_KEY is a placeholder constant
                .config("fs.s3n.awsSecretAccessKey", AWS_SECRET_KEY) // AWS_SECRET_KEY is a placeholder constant
                .getOrCreate();

        JavaSparkContext sc = new JavaSparkContext(spark.sparkContext());
        String input_path = "s3a://bucket/2018/07/28/zqa.parquet";
        Dataset<Row> dF = spark.read().load(input_path); // THIS LINE CAUSES ERROR
    }
}
Here are the dependencies in pom.xml:
<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>2.3.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.11</artifactId>
        <version>2.3.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-aws</artifactId>
        <version>3.1.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>3.1.1</version>
    </dependency>
</dependencies>
Any help would be appreciated. Thanks in advance!
Answer 0 (score: 1)
Solved this by adding the following dependency; the missing org.apache.hadoop.fs.StreamCapabilities class lives in hadoop-common, so it must be on the classpath at the same version as hadoop-aws:
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>3.1.1</version>
</dependency>