I am trying to write unit tests that assert the interaction between Spark and HDFS. Here is the @BeforeClass setup:
Configuration hdfsConfiguration = new HdfsConfiguration();
// Use a fresh base directory under target/ so each run starts clean.
File testDir = new File("./target/hdfs/").getAbsoluteFile();
FileUtil.fullyDelete(testDir);
hdfsConfiguration.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, testDir.getAbsolutePath());
MiniDFSCluster.Builder builder = new MiniDFSCluster.Builder(hdfsConfiguration);
MiniDFSCluster hdfsCluster = builder.build();
// build() injects the cluster's fs.defaultFS into hdfsConfiguration,
// so this returns a client for the mini cluster.
FileSystem fileSystem = FileSystem.get(hdfsConfiguration);
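For reference, the matching @AfterClass teardown I pair with this setup looks roughly like the sketch below (the class name HdfsSparkTeardown and the static fields are placeholders; in the real test they are populated by the @BeforeClass setup above):

```java
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.AfterClass;

public class HdfsSparkTeardown {
    // Placeholders: populated by the @BeforeClass setup in the real test.
    static MiniDFSCluster hdfsCluster;
    static FileSystem fileSystem;

    @AfterClass
    public static void tearDownCluster() throws IOException {
        if (fileSystem != null) {
            fileSystem.close();     // release the HDFS client handle first
        }
        if (hdfsCluster != null) {
            hdfsCluster.shutdown(); // stop the NameNode and DataNodes
        }
    }
}
```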
The POM-level dependencies are as follows:
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.7.0</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-minicluster</artifactId>
    <version>2.7.0</version>
    <scope>test</scope>
</dependency>
It fails on this line:
MiniDFSCluster hdfsCluster = builder.build();
Error stack trace:
java.lang.NoSuchMethodError: org.apache.hadoop.tracing.SpanReceiverHost.getInstance(Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/tracing/SpanReceiverHost;
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:641)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:810)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:794)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1487)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1115)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:986)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:815)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:475)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:434)
Investigation so far:
I have run mvn dependency:tree to confirm that the hadoop-hdfs library versions are 2.6 or above (transitive dependencies checked as well).
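Since a NoSuchMethodError usually means two different versions of a library ended up on the runtime classpath, I also tried printing which jar the offending class is actually loaded from. This is just a diagnostic sketch (WhichJar and locate are names I made up), using the standard getProtectionDomain().getCodeSource() API:

```java
import java.security.CodeSource;

public class WhichJar {
    // Returns the jar/path a class was loaded from, "(bootstrap/JDK)" if it
    // has no code source, or "not on classpath" if it cannot be found.
    static String locate(String className) {
        try {
            CodeSource src =
                Class.forName(className).getProtectionDomain().getCodeSource();
            return src == null ? "(bootstrap/JDK)" : src.getLocation().toString();
        } catch (ClassNotFoundException e) {
            return "not on classpath";
        }
    }

    public static void main(String[] args) {
        // The class named in the NoSuchMethodError, plus the minicluster entry point.
        String[] suspects = {
            "org.apache.hadoop.tracing.SpanReceiverHost",
            "org.apache.hadoop.hdfs.MiniDFSCluster"
        };
        for (String name : suspects) {
            System.out.println(name + " -> " + locate(name));
        }
    }
}
```

If the two classes resolve to jars with different Hadoop versions, that would explain the mismatch.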
I have also checked the ~/.m2 repository and cleaned out the org/apache/hadoop directory there to make sure no stale versions were left behind.
Any help/pointers would be appreciated. I will post an update if I manage to resolve this.