I am trying to add up the numbers for each duplicate name. I have managed to split out the names and the numbers, but I don't know how to sum the numbers. If you need more information to help, please let me know.
Thanks in advance.
Here is my code so far:
package hadoop.names;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.File;
import java.io.IOException;
import java.util.Iterator;

import org.apache.commons.io.FileUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class names_app {

    public static class GroupMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        /** The name. */
        Text nameText = new Text();
        /** The count text. */
        IntWritable count = new IntWritable();

        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            String line = value.toString();
            String[] keyvalue = line.split(",");
            nameText.set(new Text(keyvalue[3]));
            count.set(Integer.parseInt(keyvalue[4]));
            context.write(nameText, count);
        }
    }

    public static class GroupReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int n = 0;
            while (values.hasNext()) {
                n = n + values.next().get();
            }
            context.write(key, new IntWritable(n));
        }
    }

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        FileUtils.deleteDirectory(new File("/output/names"));
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "GroupMR");
        job.setJarByClass(names_app.class);
        job.setMapperClass(GroupMapper.class);
        job.setReducerClass(GroupReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.setMaxInputSplitSize(job, 10);
        FileInputFormat.setMinInputSplitSize(job, 100);
        FileInputFormat.addInputPath(job, new Path("/input_data/Sample_of_names.csv"));
        FileOutputFormat.setOutputPath(job, new Path("/output/names"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Sample input:
NJ,F,1910,Mary,593
NJ,F,1910,Helen,438
NJ,F,1910,Anna,355
NJ,F,1910,Margaret,311
NJ,F,1910,Elizabeth,260
NJ,F,1910,Dorothy,255
NJ,F,1910,Rose,201
NJ,F,1910,Ruth,188
NJ,F,1910,Mildred,174
NJ,F,1910,Florence,169
NJ,F,1910,Catherine,158
NJ,F,1910,Marie,152
NJ,F,1910,Lillian,130
NJ,F,1910,Alice,125
NJ,F,1910,Frances,124
Link to the original dataset: https://www.kaggle.com/datagov/usa-names
I get the following output as a CSV:
Aaliyah,5
Aaron,14
Aaron,22
Aaron,11
Aaron,17
Aaron,24
Aaron,12
Aaron,241
Aaron,9
Aaron,11
Aaron,199
Aaron,16
Abbey,5
Abbie,5
Abbie,5
Abbie,5
What I want is:
Aaliyah,5
Aaron,576
Abbey,5
Abbie,15
Answer 0 (score: 0)
For some reason you are using Hadoop's default reducer, the identity reducer. I believe this is because there is a typo in your reduce signature, so the parent class's reduce is called instead of the one you wrote: the new API hands the reducer an Iterable<IntWritable>, not an Iterator<IntWritable>, so your method does not actually override anything. To avoid this kind of problem, it is wise to use Java's @Override annotation. Try adding @Override to your reduce method and recompiling; the compiler will then reject the method if it does not really override its parent's.
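As an illustration, here is a minimal sketch of the corrected reducer (the key change is Iterable instead of Iterator, which makes the method a real override):

public static class GroupReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int n = 0;
        // With a matching signature, Hadoop now calls this method instead of the default.
        for (IntWritable value : values) {
            n += value.get();
        }
        context.write(key, new IntWritable(n));
    }
}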
According to Hadoop's source code, the default reducer looks like this (shown here in the old mapred API; the new mapreduce API behaves the same way):
/** Writes all keys and values directly to output. */
public void reduce(K key, Iterator<V> values,
                   OutputCollector<K, V> output, Reporter reporter)
        throws IOException {
    while (values.hasNext()) {
        output.collect(key, values.next());
    }
}
Basically, it writes the mapper's output straight through, unchanged.
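For reference, the new mapreduce API's Reducer, which your job extends, has an analogous default that looks roughly like this:

@SuppressWarnings("unchecked")
protected void reduce(KEYIN key, Iterable<VALUEIN> values, Context context)
        throws IOException, InterruptedException {
    // Identity behavior: every value is written out under its key, unchanged.
    for (VALUEIN value : values) {
        context.write((KEYOUT) key, (VALUEOUT) value);
    }
}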
Alternatively, just use the LongSumReducer that ships with Hadoop, which performs essentially the reduce operation you want. The old-API class lives in org.apache.hadoop.mapred.lib.LongSumReducer; since your job is written against the new mapreduce API, use the equivalents in org.apache.hadoop.mapreduce.lib.reduce: LongSumReducer for LongWritable values, or IntSumReducer for the IntWritable values your mapper emits. In your main function, write:
job.setReducerClass(IntSumReducer.class);
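A minimal sketch of the relevant main() lines under that assumption; since the sum is associative, the same class can also be set as a combiner to pre-aggregate on the map side:

import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

// ... inside main(), after creating the Job:
job.setMapperClass(GroupMapper.class);
// Pre-aggregate per-name counts on each mapper, then finish the sum in the reducer.
job.setCombinerClass(IntSumReducer.class);
job.setReducerClass(IntSumReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);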