I am trying to write a simple MapReduce program with the new API (0.20.2) that finds the largest prime number. This is what my map and reduce classes look like:
public class PrimeNumberMap extends Mapper<LongWritable, Text, IntWritable, IntWritable> {

    public void map(LongWritable key, Text kValue, Context context)
            throws IOException, InterruptedException {
        Integer value = new Integer(kValue.toString());
        if (isNumberPrime(value)) {
            context.write(new IntWritable(value), new IntWritable(new Integer(key.toString())));
        }
    }

    boolean isNumberPrime(Integer number) {
        if (number < 2) return false;
        if (number == 2) return true;
        // Trial division; "<=" is needed so that e.g. 4 is not reported as prime.
        for (int counter = 2; counter <= (number / 2); counter++) {
            if (number % counter == 0)
                return false;
        }
        return true;
    }
}
public class PrimeNumberReduce extends Reducer<IntWritable, IntWritable, IntWritable, IntWritable> {

    public void reduce(IntWritable primeNo, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int maxValue = Integer.MIN_VALUE;
        for (IntWritable value : values) {
            maxValue = Math.max(maxValue, value.get());
        }
        //output.collect(primeNo, new IntWritable(maxValue));
        context.write(primeNo, new IntWritable(maxValue));
    }
}
public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
    if (args.length == 0) {
        System.err.println(" Usage:\n\tPrimenumber <input Directory> <output Directory>");
        System.exit(-1);
    }

    Job job = new Job();
    job.setJarByClass(Main.class);
    job.setJobName("Prime");
    // Creating job configuration object

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    job.setMapOutputKeyClass(IntWritable.class);
    job.setMapOutputValueClass(IntWritable.class);
    job.setOutputKeyClass(IntWritable.class);
    job.setOutputValueClass(IntWritable.class);

    String star = "*********************************************";
    System.out.println(star + "\n Prime number computer \n" + star);
    System.out.println(" Application started ... keeping fingers crossed :/ ");
    System.exit(job.waitForCompletion(true) ? 0 : 1);
}
}
But I still get an error about a key-type mismatch in the map output:
java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.IntWritable, recieved org.apache.hadoop.io.LongWritable
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1034)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:595)
    at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
    at org.apache.hadoop.mapreduce.Mapper.map(Mapper.java:124)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:668)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:334)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1109)
    at org.apache.hadoop.mapred.Child.main(Child.java:264)
2012-06-13 14:27:21,116 INFO org.apache.hadoop.mapred.Task: Runnning cleanup for the task
Can someone suggest what is going wrong? I have tried everything I can think of.
Answer 0 (score: 8)
You have not configured a Mapper or Reducer class in your main block, so the default Mapper is used. It is known as the identity mapper: every key/value pair it receives is written straight to the output, which is why a LongWritable (the byte-offset key produced by the default TextInputFormat) ends up as your map output key.
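For reference, the default org.apache.hadoop.mapreduce.Mapper effectively does this (a paraphrase of its map() method, not a verbatim copy):

    protected void map(KEYIN key, VALUEIN value, Context context)
            throws IOException, InterruptedException {
        // Identity behaviour: the input pair is passed through unchanged, so the
        // LongWritable input key becomes the map output key.
        context.write((KEYOUT) key, (VALUEOUT) value);
    }

Registering your own classes in the driver fixes it: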
job.setMapperClass(PrimeNumberMap.class);
job.setReducerClass(PrimeNumberReduce.class);
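Putting it together, a driver along these lines should work. This is only a sketch assembled from the code in the question (the class names PrimeNumberMap, PrimeNumberReduce and Main are taken from it):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class Main {
    public static void main(String[] args) throws Exception {
        if (args.length < 2) {
            System.err.println("Usage:\n\tPrimenumber <input directory> <output directory>");
            System.exit(-1);
        }

        Job job = new Job();
        job.setJarByClass(Main.class);
        job.setJobName("Prime");

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // The missing pieces: tell the job which Mapper and Reducer to run.
        job.setMapperClass(PrimeNumberMap.class);
        job.setReducerClass(PrimeNumberReduce.class);

        job.setMapOutputKeyClass(IntWritable.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(IntWritable.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}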
Answer 1 (score: 0)
The mapper should be defined as follows,
public class PrimeNumberMap extends Mapper<**IntWritable**, Text, IntWritable, IntWritable> {
instead of
public class PrimeNumberMap extends Mapper<LongWritable, Text, IntWritable, IntWritable> {
As mentioned in the comments, you should also define the mapper and reducer in the job:
job.setMapperClass(PrimeNumberMap.class);
job.setReducerClass(PrimeNumberReduce.class);
See Hadoop: The Definitive Guide, 3rd edition, chapter 2, page 24.
Answer 2 (score: 0)
I am new to Hadoop MapReduce programs. In my map I used IntWritable values, but in my reduce I accumulated the IntWritable values and converted the result to a double before writing it out as a DoubleWritable in context.write, and it failed at runtime. The way I handled the int from the map becoming a double in the reduce was:
Mapper(LongWritable,Text,Text,DoubleWritable)
Reducer(Text,DoubleWritable,Text,DoubleWritable)
job.setOutputValueClass(DoubleWritable.class)
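If the intent is for the map to emit IntWritable values while the reduce emits DoubleWritable values, that combination works as long as the job declares both sets of types. Below is a minimal sketch under that assumption; the class names IntToDoubleJob, SumMapper and AvgReducer, and the "<word> <count>" input line format, are invented for illustration:

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class IntToDoubleJob {

    // Map side: emits IntWritable values (assumes input lines of the form "<word> <count>").
    public static class SumMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] parts = value.toString().trim().split("\\s+");
            context.write(new Text(parts[0]), new IntWritable(Integer.parseInt(parts[1])));
        }
    }

    // Reduce side: consumes IntWritable values but emits a DoubleWritable average.
    public static class AvgReducer extends Reducer<Text, IntWritable, Text, DoubleWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            int count = 0;
            for (IntWritable v : values) {
                sum += v.get();
                count++;
            }
            context.write(key, new DoubleWritable((double) sum / count));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = new Job();
        job.setJarByClass(IntToDoubleJob.class);
        job.setMapperClass(SumMapper.class);
        job.setReducerClass(AvgReducer.class);

        // The map output types differ from the final output types, so both must be declared.
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);   // what the mapper writes
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(DoubleWritable.class);   // what the reducer writes

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}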