Hi everyone. I'm new to Hadoop and very interested in it. I've been reading the book "Hadoop: The Definitive Guide", and I ran into a problem when trying to run the ShowFileStatusTest example at around page 60. The problem is that the fileStatusForFile test always fails. No file gets created in either the HDFS filesystem or the local filesystem, so it reports "File does not exist". Here are some of the log messages:
> fileStatusForFile(ShowFileStatusTest)
> java.io.FileNotFoundException: File does not exist: /home/hadoop/setenv.sh
>     at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1122)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
I tried to make sense of this message, but failed. Why does it use fs.defaultFS? And why is it hdfs://127.0.0.1:0? I'm really confused, because I checked both files, core-site.xml and core-default.xml: core-default.xml sets fs.defaultFS to file:/// and core-site.xml sets fs.default.name to hdfs://master:9000:
18/04/30 18:40:40 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/04/30 18:40:40 INFO namenode.NameNode: createNameNode []
18/04/30 18:40:40 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
18/04/30 18:40:40 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
18/04/30 18:40:40 INFO impl.MetricsSystemImpl: NameNode metrics system started
18/04/30 18:40:40 INFO namenode.NameNode: fs.defaultFS is hdfs://127.0.0.1:0
Here is my code:
import static org.junit.Assert.*;
import static org.hamcrest.CoreMatchers.*;
//import static org.hamcrest.*;
import org.junit.Before;
import org.junit.After;
import org.junit.Test;
import java.io.*;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.fs.Path;

public class ShowFileStatusTest {
    private MiniDFSCluster cluster;
    private FileSystem fs;

    @Before
    public void setUp() throws IOException {
        Configuration conf = new Configuration();
        // conf.set("fs.default.name","hdfs://master:9000");
        if (System.getProperty("test.build.data") == null) {
            System.setProperty("test.build.data", "/tmp");
        }
        cluster = new MiniDFSCluster(conf, 1, true, null);
        fs = cluster.getFileSystem();
        OutputStream out = fs.create(new Path("/dir/file"));
        out.write("content".getBytes("UTF-8"));
        out.close();
    }

    @After
    public void tearDown() throws IOException {
        if (fs != null) { fs.close(); }
        if (cluster != null) { cluster.shutdown(); }
    }

    @Test(expected = FileNotFoundException.class)
    public void throwsFileNotFoundForNonExistentFile() throws IOException {
        fs.getFileStatus(new Path("no-such-file"));
    }

    @Test
    public void fileStatusForFile() throws IOException {
        Path file = new Path("/home/hadoop/setenv.sh");
        FileStatus stat = fs.getFileStatus(file);
        assertThat(stat.getPath().toUri().getPath(), is("/home/hadoop/setenv.sh"));
        assertThat(stat.isDir(), is(false));
        assertThat(stat.getLen(), is(7L));
        // assertThat(stat.getModificationTime(),
        //     is(lessThanOrEqualTo(System.currentTimeMillis()))
        // );
        assertThat(stat.getReplication(), is((short) 1));
        assertThat(stat.getBlockSize(), is(64*124*1024));
        assertThat(stat.getOwner(), is("hadoop"));
        assertThat(stat.getGroup(), is("supergroup"));
        assertThat(stat.getPermission().toString(), is("rw-r--r--"));
    }
}
Well, I'm not a native speaker, so I apologize for my poor English. But I'll try my best to make myself understood. Thanks.
Answer 0 (score: 0)
Try making these changes in your code and run the tests again:
Line 27: OutputStream out = fs.create(new Path("/dir/file"));
to: OutputStream out = fs.create(new Path("/home/hadoop/setenv.sh"));
Line 54: assertThat(stat.getBlockSize(),is(64*124*1024));
to: assertThat(stat.getBlockSize(),is(128*1024*1024L));
(If the test fails because of the owner) Line 55: assertThat(stat.getOwner(),is("hadoop"));
to: assertThat(stat.getOwner(),is("<os-user-running-the-test>"));
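Taken together, the changed lines would look roughly like this (a sketch; 128 MB is the dfs.blocksize default in Hadoop 2.x, and System.getProperty("user.name") stands in for whichever OS user runs the test):

    // setUp(): create the file at the exact path the test later queries
    OutputStream out = fs.create(new Path("/home/hadoop/setenv.sh"));
    out.write("content".getBytes("UTF-8"));
    out.close();

    // fileStatusForFile(): compare against the 128 MB default, written as a
    // long literal, since getBlockSize() returns a long
    assertThat(stat.getBlockSize(), is(128 * 1024 * 1024L));

    // if the owner assertion fails, expect the OS user running the test
    assertThat(stat.getOwner(), is(System.getProperty("user.name")));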
Answer 1 (score: 0)
I think I have solved my problem. Here is my final code.
import static org.junit.Assert.*;
import static org.hamcrest.CoreMatchers.*;
//import static org.hamcrest.*;
import org.junit.Before;
import org.junit.After;
import org.junit.Test;
import java.io.*;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.fs.Path;

public class ShowFileStatusTest {
    private MiniDFSCluster cluster;
    private FileSystem fs;

    @Before
    public void setUp() throws IOException {
        Configuration conf = new Configuration();
        // conf.set("fs.default.name","hdfs://master:9000");
        if (System.getProperty("test.build.data") == null) {
            System.setProperty("test.build.data", "/tmp");
        }
        cluster = new MiniDFSCluster(conf, 1, true, null);
        fs = cluster.getFileSystem();
        OutputStream out = fs.create(new Path("/dir/file"));
        out.write("content".getBytes("UTF-8"));
        out.close();
    }

    @After
    public void tearDown() throws IOException {
        if (fs != null) { fs.close(); }
        if (cluster != null) { cluster.shutdown(); }
    }

    @Test(expected = FileNotFoundException.class)
    public void throwsFileNotFoundForNonExistentFile() throws IOException {
        fs.getFileStatus(new Path("no-such-file"));
    }

    @Test
    public void fileStatusForFile() throws IOException {
        Path file = new Path("/dir/file");
        FileStatus stat = fs.getFileStatus(file);
        assertThat(stat.getPath().toUri().getPath(), is("/dir/file"));
        assertThat(stat.isDir(), is(false));
        assertThat(stat.getLen(), is(7L));
        // assertThat(stat.getModificationTime(),
        //     is(lessThanOrEqualTo(System.currentTimeMillis()))
        // );
        assertThat(stat.getReplication(), is((short) 1));
        assertThat(stat.getBlockSize(), is(128*1024*1024L));
        assertThat(stat.getOwner(), is("hadoop"));
        assertThat(stat.getGroup(), is("supergroup"));
        assertThat(stat.getPermission().toString(), is("rw-r--r--"));
    }

    @Test
    public void fileStatusForDirectory() throws IOException {
        Path dir = new Path("/dir");
        FileStatus stat = fs.getFileStatus(dir);
        assertThat(stat.getPath().toUri().getPath(), is("/dir"));
        assertThat(stat.isDir(), is(true));
        assertThat(stat.getLen(), is(0L));
        // assertThat(stat.getModificationTime());
        assertThat(stat.getReplication(), is((short) 0));
        assertThat(stat.getBlockSize(), is(0L));
        assertThat(stat.getOwner(), is("hadoop"));
        assertThat(stat.getGroup(), is("supergroup"));
        assertThat(stat.getPermission().toString(), is("rwxr-xr-x"));
    }
}
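A side note: instead of hardcoding the block size and owner, I suppose the expected values could also be read from the running mini cluster itself. This is just my own sketch, not code from the book:

    // derive the expectations from the cluster instead of hardcoding them
    assertThat(stat.getBlockSize(), is(fs.getDefaultBlockSize(file)));
    assertThat(stat.getOwner(), is(System.getProperty("user.name")));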
Thanks to Jagrut Sharma. I changed the key line assertThat(stat.getBlockSize(),is(64*124*1024)); to assertThat(stat.getBlockSize(),is(128*1024*1024L)); and no test fails any more; my problem is solved. Looking back at this example, I think the main problem was that I kept misreading the log messages. With the old code using assertThat(stat.getBlockSize(),is(64*124*1024));, I always got this log message for the failed assertion:
java.lang.NoSuchMethodError: org.hamcrest.Matcher.describeMismatch(Ljava/lang/Object;Lorg/hamcrest/Description;)V
This message confused me a lot. I did not realize it meant my assertion was wrong; instead, I misread it as an error about using some function improperly. What's more, I took it for granted that there had to be a file "/dir/file" in the local filesystem or in HDFS, because I did not know that this example uses MiniDFSCluster. Anyway, what I should do is learn more...
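For anyone hitting the same two points of confusion, my understanding (pieced together from the logs, so treat it as an assumption): MiniDFSCluster starts an in-process NameNode bound to 127.0.0.1 with port 0, which means "pick any free port", so the log prints fs.defaultFS is hdfs://127.0.0.1:0 no matter what core-site.xml or core-default.xml say, and the only filesystem the test sees is the one returned by cluster.getFileSystem(). A quick way to check:

    // the mini cluster's URI uses a random free port chosen at startup;
    // it is unrelated to fs.defaultFS / fs.default.name in core-site.xml
    FileSystem fs = cluster.getFileSystem();
    System.out.println(fs.getUri());   // e.g. hdfs://127.0.0.1:<some random port>

As for the NoSuchMethodError, it seems to be a Hamcrest classpath issue rather than a Hadoop one: Matcher.describeMismatch only exists from Hamcrest 1.2 on, and JUnit only calls it when an assertThat actually fails, so with an older Hamcrest on the classpath (for example the 1.1 classes bundled into some JUnit jars) a failing assertion surfaces as this confusing error instead of a readable mismatch message.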