I am running my MapReduce code and the error I am getting is:
Error: java.lang.ClassCastException: org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.io.IntWritable
at test.temp$Mymapper.map(temp.java:1)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
The code is as follows:
package test;

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
//import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class temp {

    public static class Mymapper extends Mapper<Object, Text, IntWritable, Text> {
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            int month = Integer.parseInt(value.toString().substring(17, 19));
            IntWritable mon = new IntWritable(month);
            String temp = value.toString().substring(27, 31);
            String t = null;
            for (int i = 0; i < temp.length(); i++) {
                if (temp.charAt(i) == ',')
                    break;
                else
                    t = t + temp.charAt(i);
            }
            Text data = new Text(value.toString().substring(22, 26) + t);
            context.write(mon, data);
        }
    }

    public static class Myreducer extends Reducer<IntWritable, Text, IntWritable, IntWritable> {
        public void reduce(IntWritable key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
            String temp = "";
            int max = 0;
            for (Text t : values) {
                temp = t.toString();
                if (temp.substring(0, 4) == "TMAX") {
                    if (Integer.parseInt(temp.substring(4, temp.length())) > max) {
                        max = Integer.parseInt(temp.substring(4, temp.length()));
                    }
                }
            }
            context.write(key, new IntWritable(max));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "temp");
        job.setJarByClass(temp.class);
        job.setMapperClass(Mymapper.class);
        job.setCombinerClass(Myreducer.class);
        job.setReducerClass(Myreducer.class);
        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }
}
The input file is:
USC00300379,19000101,TMAX,-78 ,,, 6, USC00300379,19000101,TMAX,-133 ,,, 6, USC00300379,19000101,TMAX,127 ,,, 6
Please reply and help!
Answer 0 (score: 0)
I think you are using TextInputFormat as the input format for your job. That generates LongWritable / Text, and Hadoop derives the map output classes from it.
Try setting the map output classes explicitly and removing the combiner:
job.setMapOutputKeyClass(IntWritable.class);
job.setMapOutputValueClass(Text.class);
// job.setCombinerClass(Myreducer.class);
The combiner can only be used if the map and reduce outputs are compatible!
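For illustration only (not part of the original answer), here is a minimal sketch of a mapper declaration that lines up with TextInputFormat, which passes the byte offset of each line as a LongWritable key; the parsing body is left out, and import org.apache.hadoop.io.LongWritable would be needed:

public static class Mymapper extends Mapper<LongWritable, Text, IntWritable, Text> {
    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // key is the byte offset supplied by TextInputFormat, value is one input line.
        // ... parse the line and emit (month, "TMAX<reading>") exactly as in the question ...
    }
}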
Answer 1 (score: 0)
You have set the following in your driver:
job.setOutputKeyClass(IntWritable.class);
job.setOutputValueClass(IntWritable.class);
This means that the output key class of both the mapper and the reducer should be IntWritable, and the output value class should be IntWritable.
The reducer is fine:
public static class Myreducer extends Reducer<IntWritable,Text,IntWritable,IntWritable>
Here, both the output key and the output value are IntWritable.
The problem is with the mapper:
public static class Mymapper extends Mapper<Object, Text, IntWritable,Text>
Here, the output key class is IntWritable, but the output value class is Text (IntWritable was expected).
If the mapper's output key/value classes differ from the reducer's output key/value classes, you need to add the following statements to the driver explicitly:
setMapOutputKeyClass();
setMapOutputValueClass();
Make the following changes in your code:
Set the map output key and value classes: In your case, since the mapper and reducer output key and value classes are different, you need to set the following:
job.setMapOutputKeyClass(IntWritable.class);
job.setMapOutputValueClass(Text.class);
job.setOutputKeyClass(IntWritable.class);
job.setOutputValueClass(IntWritable.class);
Disable the combiner: Since you use your Reducer code as the Combiner, the output of the Combiner will be IntWritable and IntWritable. The Reducer, however, expects its input to be IntWritable and Text. Hence you get the following exception, because the value is an IntWritable instead of a Text:
Error: java.io.IOException: wrong value class: class org.apache.hadoop.io.IntWritable is not class org.apache.hadoop.io.Text
To get rid of this error, you need to disable the Combiner, i.e. remove the following line:
job.setCombinerClass(Myreducer.class);
Do not use the reducer as the combiner: If you definitely need a combiner, write one whose output key/value classes are IntWritable and Text, matching the reducer's input.
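As a hedged illustration (the class name MyCombiner and the parsing details are assumptions, not part of the original answer), such a combiner could keep only the largest TMAX reading per key while still emitting IntWritable / Text, so its output matches the reducer's input:

public static class MyCombiner extends Reducer<IntWritable, Text, IntWritable, Text> {
    @Override
    public void reduce(IntWritable key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        int max = Integer.MIN_VALUE;
        Text best = null;
        for (Text t : values) {
            String s = t.toString();
            if (s.startsWith("TMAX")) {
                try {
                    // Assumes the mapper emits values shaped like "TMAX<reading>".
                    int reading = Integer.parseInt(s.substring(4).trim());
                    if (best == null || reading > max) {
                        max = reading;
                        best = new Text(s);
                    }
                } catch (NumberFormatException e) {
                    context.write(key, t); // malformed record: pass it through unchanged
                }
            } else {
                context.write(key, t); // non-TMAX record: pass it through unchanged
            }
        }
        if (best != null) {
            context.write(key, best); // re-emit as Text, keeping the types reducer-compatible
        }
    }
}

It would then be registered with job.setCombinerClass(MyCombiner.class); instead of the reducer.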
Answer 2 (score: 0)
When you set the following in the driver:

job.setOutputKeyClass(IntWritable.class);
job.setOutputValueClass(IntWritable.class);

it defines the output classes of both the mapper and the reducer, not just the reducer. That means your mapper should call context.write(IntWritable, IntWritable), but you have coded context.write(IntWritable, Text).

Fix: when the map output types differ from the reduce output types, you need to set the mapper's output types explicitly. So add the following to your driver code:

job.setMapOutputKeyClass(IntWritable.class);
job.setMapOutputValueClass(Text.class);
Answer 3 (score: 0)
These are the changes I made:
public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "temp");
    job.setJarByClass(Temp.class);
    job.setMapperClass(Mymapper.class);
    job.setReducerClass(Myreducer.class);
    job.setMapOutputKeyClass(IntWritable.class);
    job.setMapOutputValueClass(Text.class);
    job.setOutputKeyClass(IntWritable.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    job.setNumReduceTasks(1);
    job.waitForCompletion(true);
}
Output: 10 0
For the explanation, please follow Manjunath Ballur's post.