Solution: use a better tutorial - http://hadoop.apache.org/mapreduce/docs/r0.22.0/mapred_tutorial.html
I just started with MapReduce, and I'm running into a weird error that I haven't been able to answer through Google. I'm making a basic WordCount program, but when I run it, I get the following error during the Reduce phase:
java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.mapred.Reducer.&lt;init&gt;()
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:115)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:485)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
The WordCount program is the one from the Apache MapReduce tutorial. I'm running Hadoop 1.0.3 in pseudo-distributed mode on Mountain Lion, and I believe all of that is working correctly, since the bundled examples run fine. Any ideas?
Edit: Here is my code for reference:
package mrt;

import java.io.IOException;
import java.util.*;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;

public class WordCount {

    public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

    public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("Wordcount");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reducer.class);
        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
    }
}
Answer 0 (score: 8)
The problem is not your choice of API. Both the stable (mapred.*) and the evolving (mapreduce.*) APIs are fully supported, and the framework itself runs tests against both to ensure there are no regressions or breakage across releases.
The problem is this line:
conf.setReducerClass(Reducer.class);
You are setting the Reducer interface itself as the reducer, when you should be setting an implementation of the Reducer interface. Changing it to:
conf.setReducerClass(Reduce.class);
will fix it.
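The stack trace matches this diagnosis: ReflectionUtils.newInstance ultimately looks up a no-argument constructor via reflection, and an interface has no constructors at all, hence the NoSuchMethodException on Reducer.&lt;init&gt;(). The failure can be reproduced without Hadoop; in this sketch, Reducer and Reduce are local stand-ins for the Hadoop types, not the real classes:

```java
// Demonstrates why passing an interface to a reflective factory fails
// the way Hadoop's ReflectionUtils.newInstance does.
public class ReflectionDemo {

    // Stand-in for Hadoop's Reducer interface (hypothetical, not the real type).
    public interface Reducer {}

    // Stand-in for the user's concrete implementation.
    public static class Reduce implements Reducer {}

    // Simplified version of what ReflectionUtils.newInstance does:
    // find the no-arg constructor and invoke it, wrapping failures
    // in a RuntimeException.
    public static Object newInstance(Class<?> c) {
        try {
            return c.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Instantiating the concrete class works fine.
        Object r = newInstance(Reduce.class);
        if (!(r instanceof Reducer)) throw new AssertionError();

        // Instantiating the interface fails exactly like the job did:
        // RuntimeException caused by NoSuchMethodException on <init>().
        try {
            newInstance(Reducer.class);
            throw new AssertionError("expected a RuntimeException");
        } catch (RuntimeException e) {
            if (!(e.getCause() instanceof NoSuchMethodException)) {
                throw new AssertionError();
            }
        }
        System.out.println("ok");
    }
}
```

Because Reduce.class and Reducer.class differ by only two characters, this is an easy typo to make and a hard one to spot.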
Answer 1 (score: 1)
Check to make sure you are using the hadoop.mapreduce package rather than the hadoop.mapred package. The mapred package is older, and its classes have different methods than the current mapreduce classes.