线程" main"中的例外情况java.io.IOException:无法初始化Cluster

Date: 2016-08-09 07:50:17

Tags: eclipse hadoop jar mapreduce

I am trying to run a simple Hadoop MapReduce program in Eclipse on Windows, and I get the following exception.

Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:121)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:83)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:76)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1188)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1184)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Unknown Source)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1183)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1212)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1236)
at com.hadoop.mapreduce.WordCountDriverClass.main(WordCountDriverClass.java:41)

These are the jar files I have added to the project.

com.google.guava_1.6.0.jar
commons-configuration-1.7.jar
commons-lang-2.6.jar
commons-logging-1.1.3.jar
commons.collections-3.2.1.jar
guava-13.0.1.jar
hadoop-annotations-2.7.2.jar
hadoop-auth-2.6.0.jar
hadoop-common-2.3.0.jar
hadoop-common.jar
hadoop-mapreduce-client-core-2.0.2-alpha.jar
hadoop-mapreduce-client-core-2.7.2.jar
hadoop-mapreduce-client-jobclient-2.2.0.jar
hadoop-test-1.2.1.jar
log4j-1.2.17.jar
slf4j-api-1.7.7.jar
slf4j-simple-1.6.1.jar

I added these jar files after checking the exception messages in the console, but I cannot make sense of this exception. Can anyone help me resolve it?

Here is my driver class.

    package com.hadoop.mapreduce;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCountDriverClass {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // Creating a job
            Job job = Job.getInstance(conf, "WordCountDriverClass");
            job.setJarByClass(WordCountDriverClass.class);
            job.setMapperClass(WordCountMapper.class);
            job.setReducerClass(WordCountReducer.class);

            job.setNumReduceTasks(2);
            job.setInputFormatClass(KeyValueTextInputFormat.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path("inputfiles"));
            FileOutputFormat.setOutputPath(job, new Path("outputfiles"));

            job.waitForCompletion(true);
        }
    }
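
From the stack trace, the failure comes out of Cluster.initialize(), which discovers the available job runners through Java's ServiceLoader. As a diagnostic, a sketch like the one below (assuming Hadoop 2.x, where org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider is the service interface) lists which providers are actually on the classpath:

    import java.util.ServiceLoader;

    import org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider;

    // Cluster.initialize() iterates over these providers and throws
    // "Cannot initialize Cluster" when none of them can build a client
    // for the configured mapreduce.framework.name.
    public class ProviderCheck {
        public static void main(String[] args) {
            for (ClientProtocolProvider p : ServiceLoader.load(ClientProtocolProvider.class)) {
                System.out.println(p.getClass().getName());
            }
        }
    }

If LocalClientProtocolProvider (shipped in hadoop-mapreduce-client-common, which is absent from the jar list above) does not show up, local execution cannot be initialized.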

1 Answer:

Answer 0 (score: 0):

It looks like you are running the WordCount example, which needs hadoop-core 1.2.1 and hadoop-common 2.2.0. If you are using Maven, the configuration should be as simple as:

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>1.2.1</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.2.0</version>
</dependency>
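
One follow-up to aligning the versions: when the job is launched from Eclipse with no cluster configured, it is expected to fall back to the local runner, whose provider ships in hadoop-mapreduce-client-common on Hadoop 2.x. The framework can also be pinned explicitly; a minimal sketch (these values are the stock local-mode defaults rather than anything project-specific):

    import org.apache.hadoop.conf.Configuration;

    // Pin the job to the local runner: "local" selects LocalJobRunner,
    // and file:/// keeps input and output on the local filesystem
    // instead of HDFS.
    Configuration conf = new Configuration();
    conf.set("mapreduce.framework.name", "local");
    conf.set("fs.defaultFS", "file:///");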