In Hadoop, how do you initialize a DistributedFileSystem object via its initialize method?

Time: 2011-01-18 20:39:08

Tags: java hadoop

It takes two arguments, a URI and a Configuration. I assume the JobConf object the client sets up should work for the Configuration, but what about the URI?

Here is the code I have for the driver:

JobClient client = new JobClient();
JobConf conf = new JobConf(ClickViewSessions.class);

conf.setJobName("ClickViewSessions");

conf.setOutputKeyClass(LongWritable.class);
conf.setOutputValueClass(MinMaxWritable.class);

FileInputFormat.addInputPath(conf, new Path("input"));
FileOutputFormat.setOutputPath(conf, new Path("output"));

conf.setMapperClass(ClickViewSessionsMapper.class);
conf.setReducerClass(ClickViewSessionsReducer.class);

client.setConf(conf);

DistributedFileSystem dfs = new DistributedFileSystem();
try {
    dfs.initialize(new URI("blah") /* what goes here??? */, conf);
} catch (Exception e) {
    throw new RuntimeException(e.toString());
}

How do I obtain the URI for the initialize call above?

2 Answers:

Answer 0 (score: 1)

You can also initialize a file system as shown below:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListHdfsDirectory {
    public static void main(String[] args) {
        try {
            Configuration conf = new Configuration();
            // Point the client at the NameNode; this is the same value as
            // fs.default.name / fs.defaultFS in conf/core-site.xml.
            conf.set("fs.defaultFS", "hdfs://localhost:54310/user/hadoop/");
            FileSystem fs = FileSystem.get(conf);
            // List the contents of the working directory.
            FileStatus[] status = fs.listStatus(new Path("."));
            for (FileStatus file : status) {
                System.out.println(file.getPath());
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
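The same listing can also be done against an explicit URI instead of setting fs.defaultFS, using the FileSystem.get(URI, Configuration) overload. A sketch, assuming the same hdfs://localhost:54310 NameNode address and directory as above (substitute your own):

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListWithExplicitUri {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The NameNode address here is an assumption; use the value of
        // fs.default.name from your conf/core-site.xml.
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:54310"), conf);
        for (FileStatus file : fs.listStatus(new Path("/user/hadoop"))) {
            System.out.println(file.getPath());
        }
        fs.close();
    }
}
```

This keeps the cluster address out of the shared Configuration, which is handy when one client talks to more than one file system.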

Answer 1 (score: 0)

The URI is the location of the HDFS instance you are running. The default file system name should be set in conf/core-site.xml; the value of 'fs.default.name' is the URI you connect to.
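Putting that together, one way to fill in the initialize call is to read the URI straight from the Configuration. A minimal sketch, assuming fs.default.name is set in your core-site.xml; the hdfs://localhost:54310 fallback below is a placeholder for your own NameNode address:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class InitDistributedFileSystem {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Read the cluster URI from conf/core-site.xml; the fallback
        // address is an assumed placeholder, not a real default.
        URI uri = URI.create(conf.get("fs.default.name", "hdfs://localhost:54310"));
        DistributedFileSystem dfs = new DistributedFileSystem();
        dfs.initialize(uri, conf);
        System.out.println("Connected to " + dfs.getUri());
        dfs.close();
    }
}
```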

If you haven't already gone through the tutorial on setting up a simple single-node cluster, I strongly recommend it:

http://hadoop.apache.org/common/docs/current/single_node_setup.html