My Hadoop version is 2.6.0-cdh5.10.0, and I am using the Cloudera VM.
I am trying to access the HDFS filesystem from my code, to read a file and add it as an input or cache file.
When I try to access HDFS files from the command line, I can list them without any problem.
命令:
[cloudera@quickstart java]$ hadoop fs -ls hdfs://localhost:8020/user/cloudera
Found 5 items
-rw-r--r-- 1 cloudera cloudera 106 2017-02-19 15:48 hdfs://localhost:8020/user/cloudera/test
drwxr-xr-x - cloudera cloudera 0 2017-02-19 15:42 hdfs://localhost:8020/user/cloudera/test_op
drwxr-xr-x - cloudera cloudera 0 2017-02-19 15:49 hdfs://localhost:8020/user/cloudera/test_op1
drwxr-xr-x - cloudera cloudera 0 2017-02-19 15:12 hdfs://localhost:8020/user/cloudera/wc_output
drwxr-xr-x - cloudera cloudera 0 2017-02-19 15:16 hdfs://localhost:8020/user/cloudera/wc_output1

When I try to access the same file from my MapReduce program, I get a FileNotFoundException. The configuration code from my MapReduce sample is:
public int run(String[] args) throws Exception {
    Configuration conf = getConf();
    if (args.length != 2) {
        System.err.println("Usage: test <in> <out>");
        System.exit(2);
    }
    ConfigurationUtil.dumpConfigurations(conf, System.out);
    LOG.info("input: " + args[0] + " output: " + args[1]);
    Job job = Job.getInstance(conf);
    job.setJobName("test");
    job.setJarByClass(Driver.class);
    job.setMapperClass(Mapper.class);
    job.setReducerClass(Reducer.class);
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(Text.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(DoubleWritable.class);
    job.addCacheFile(new Path("hdfs://localhost:8020/user/cloudera/test/test.tsv").toUri());
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    boolean result = job.waitForCompletion(true);
    return (result) ? 0 : 1;
}
The job.addCacheFile line in the snippet above results in a FileNotFoundException.
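A hint is in the exception message further below: "hdfs:/localhost:8020/..." has a single slash after the scheme. That is what java.io.File produces when an hdfs:// string is handed to a local-file API such as FileReader, which collapses the double slash during path normalization. A minimal JDK-only sketch of this behaviour on a Unix filesystem (no Hadoop classes involved):

```java
import java.io.File;

public class LocalPathDemo {
    // java.io.File treats the whole string as a local path and collapses
    // the "//", producing the "hdfs:/localhost:..." seen in the stack trace.
    static String asLocalPath(String uri) {
        return new File(uri).getPath();
    }

    public static void main(String[] args) {
        System.out.println(asLocalPath("hdfs://localhost:8020/user/cloudera/test/test.tsv"));
        // prints: hdfs:/localhost:8020/user/cloudera/test/test.tsv
    }
}
```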
2) My second question:
The entry in my core-site.xml points to localhost:9000 as the default HDFS filesystem URI. But from the command line I can only reach the default HDFS filesystem on port 8020, not 9000. When I try port 9000, I end up with a ConnectionRefused exception. I am not sure where the configuration is being read from.
My core-site.xml is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <!--
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/Users/student/tmp/hadoop-local/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
    <description>Default file system URI. URI: scheme://authority/path; scheme: method of access; authority: host, port, etc.</description>
  </property>
</configuration>
My hdfs-site.xml is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/tmp/hdfs/name</value>
    <description>Determines where on the local filesystem the DFS name
    node should store the name table (fsimage).</description>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/tmp/hdfs/data</value>
    <description>Determines where on the local filesystem a DFS data node should store its blocks.</description>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication. Usually 3; 1 in our case.</description>
  </property>
</configuration>
I get the following exception:
java.io.FileNotFoundException: hdfs:/localhost:8020/user/cloudera/test/ (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:146)
at java.io.FileInputStream.<init>(FileInputStream.java:101)
at java.io.FileReader.<init>(FileReader.java:58)
at hadoop.TestDriver$ActorWeightReducer.setup(TestDriver.java:104)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:168)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
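The reducer's setup() is not shown, but the trace (FileReader at TestDriver.java:104) suggests it opens the full hdfs:// string as a local file. A file registered with addCacheFile is localized into the task's working directory under its last path component, so setup() can open it by that basename instead. A JDK-only sketch of extracting the name (the path is the one from the question; the class name is made up):

```java
import java.nio.file.Paths;

public class CacheNameDemo {
    // The distributed cache symlinks a file into the task's working
    // directory under its last path component; that basename is the
    // local file name setup() should hand to FileReader.
    static String localName(String cachedUri) {
        return Paths.get(cachedUri).getFileName().toString();
    }

    public static void main(String[] args) {
        System.out.println(localName("hdfs://localhost:8020/user/cloudera/test/test.tsv"));
        // prints: test.tsv
    }
}
```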
Any help would be appreciated!
Answer 0 (score: 0)
You do not need to provide the full path as an argument for accessing a file from HDFS. The NameNode address (from core-site.xml) will be prefixed to it as hdfs://host_address. You only need to mention the file you want to access along with its directory structure, which in your case is /user/cloudera/test.
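For illustration, the prefixing behaviour described above works much like plain URI resolution. A JDK-only sketch, where the base URI is assumed to be the defaultFS plus the user's home directory from the question:

```java
import java.net.URI;

public class ResolveDemo {
    // Resolving a relative path against a base URI mirrors how a Hadoop
    // Path with no scheme is qualified with the default filesystem URI.
    static String qualify(String base, String relative) {
        return URI.create(base).resolve(relative).toString();
    }

    public static void main(String[] args) {
        System.out.println(qualify("hdfs://localhost:8020/user/cloudera/", "test/test.tsv"));
        // prints: hdfs://localhost:8020/user/cloudera/test/test.tsv
    }
}
```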
Coming to your 2nd question: port 8020 is the default port for HDFS. That is why you are able to access HDFS on port 8020 even though you did not configure it. The reason for the ConnectionRefused exception is that HDFS started on 8020, so port 9000 is not expecting any requests and therefore refuses the connection.
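The fallback to the default port can be seen with plain URI parsing: when a URI has no explicit port, it reports -1, and the HDFS client then substitutes the NameNode default (8020 on this distribution). A JDK-only sketch:

```java
import java.net.URI;

public class PortDemo {
    static int port(String uri) {
        return URI.create(uri).getPort();
    }

    public static void main(String[] args) {
        // No explicit port: URI reports -1, and the HDFS client falls
        // back to the NameNode default (8020 on CDH).
        System.out.println(port("hdfs://localhost/user/cloudera"));      // -1
        System.out.println(port("hdfs://localhost:9000/user/cloudera")); // 9000
    }
}
```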
For details about the default ports, refer here.