Below is a map-reduce program that counts the words in several text files. My goal is to have the results sorted in descending order of their number of occurrences. Unfortunately, the program sorts the output lexicographically by key; I want the natural ordering of the integer values instead. So I added a custom comparator via job.setSortComparatorClass(IntComparator.class). But this does not work as expected, and I get the following exception:
java.lang.Exception: java.nio.BufferUnderflowException
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:404)
Caused by: java.nio.BufferUnderflowException
at java.nio.Buffer.nextGetIndex(Buffer.java:498)
at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:355)
at WordCount$IntComparator.compare(WordCount.java:128)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.compare(MapTask.java:987)
at org.apache.hadoop.util.QuickSort.sortInternal(QuickSort.java:100)
at org.apache.hadoop.util.QuickSort.sort(QuickSort.java:64)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1277)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1174)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:609)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:675)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Any help would be greatly appreciated! :)

I have listed the entire program below, since there may be a cause for this exception that I don't understand. As you can see, I am using the new mapreduce API (org.apache.hadoop.mapreduce.*).
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparator;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

/**
 * Counts the words in several text files.
 */
public class WordCount {

    /**
     * Maps lines of text to (word, amount) pairs.
     */
    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {

        private Text word = new Text();
        private IntWritable amount = new IntWritable(1);

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String textLine = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(textLine);
            while (tokenizer.hasMoreElements()) {
                word.set((String) tokenizer.nextElement());
                context.write(word, amount);
            }
        }
    }

    /**
     * Reduces (word, amount) pairs to an (amount, word) list.
     */
    public static class Reduce extends
            Reducer<Text, IntWritable, IntWritable, Text> {

        private IntWritable amount = new IntWritable();
        private int sum;

        @Override
        protected void reduce(Text key, Iterable<IntWritable> valueList,
                Context context) throws IOException, InterruptedException {
            sum = 0;
            for (IntWritable value : valueList) {
                sum += value.get();
            }
            amount.set(sum);
            context.write(amount, key);
        }
    }

    public static class IntComparator extends WritableComparator {

        public IntComparator() {
            super(IntWritable.class);
        }

        private Integer int1;
        private Integer int2;

        @Override
        public int compare(byte[] raw1, int offset1, int length1,
                byte[] raw2, int offset2, int length2) {
            int1 = ByteBuffer.wrap(raw1, offset1, length1).getInt();
            int2 = ByteBuffer.wrap(raw2, offset2, length2).getInt();
            return int2.compareTo(int1);
        }
    }

    /**
     * Job configuration.
     *
     * @param args
     * @throws IOException
     * @throws ClassNotFoundException
     * @throws InterruptedException
     */
    public static void main(String[] args) throws IOException,
            ClassNotFoundException, InterruptedException {
        Path inputPath = new Path(args[0]);
        Path outputPath = new Path(args[1]);

        Configuration configuration = new Configuration();
        configuration.addResource(new Path("/etc/hadoop/conf/core-site.xml"));

        Job job = new Job(configuration);
        job.setJobName("WordCount");
        job.setJarByClass(WordCount.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(Text.class);

        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        job.setSortComparatorClass(IntComparator.class);

        FileInputFormat.setInputPaths(job, inputPath);
        FileSystem.get(configuration).delete(outputPath, true);
        FileOutputFormat.setOutputPath(job, outputPath);

        job.waitForCompletion(true);
    }
}
Answer 0 (score: 1)
The comparator step happens between the Mapper and the Reducer, so it cannot work for you here: you only swap the key and the value inside the Reducer itself, after the sort has already run. The default WritableComparator would normally handle the numerical ordering for you if the key were an IntWritable, but it gets a Text key instead, which results in lexicographic ordering.

As for why the final output is not sorted by the IntWritable key you write out, I am not sure. Perhaps it has to do with how TextOutputFormat works? You might have to dig into the TextOutputFormat source code for clues, but in short, setting the sort comparator probably won't help you, I'm afraid.
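To make the failure concrete: the sort comparator is handed the serialized map output keys, and in this job those are Text values (a varint length followed by the UTF-8 bytes), not 4-byte integers. A standalone sketch (my own illustration, not part of the original answer) that reproduces the exception:

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.nio.ByteBuffer;

import org.apache.hadoop.io.Text;

public class KeyBytesDemo {
    public static void main(String[] args) throws Exception {
        // Serialize a one-character Text key the way the map output buffer stores it.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        new Text("a").write(new DataOutputStream(bytes));
        byte[] raw = bytes.toByteArray();

        System.out.println(raw.length); // 2: a 1-byte varint length plus 'a'
        ByteBuffer.wrap(raw).getInt();  // throws java.nio.BufferUnderflowException
    }
}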
Answer 1 (score: 1)
As quetzalcoatl said, your comparator is useless because it is applied between the map and reduce phases, not after the reduce phase. So to get this done, you need to either sort in the cleanup of the Reducer, or write another program that sorts the reducer's output. A sketch of the cleanup approach follows below.
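A minimal sketch of that cleanup approach (the class and field names are my own, and it assumes the set of distinct words fits in the reducer's memory):

import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map.Entry;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class SortingReduce extends Reducer<Text, IntWritable, IntWritable, Text> {

    // Buffers (word, total) pairs until every key has been reduced.
    private final HashMap<String, Integer> counts = new HashMap<String, Integer>();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        counts.put(key.toString(), sum);
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        // Sort the buffered entries by count, descending, then emit them all.
        List<Entry<String, Integer>> entries =
                new ArrayList<Entry<String, Integer>>(counts.entrySet());
        Collections.sort(entries, new Comparator<Entry<String, Integer>>() {
            @Override
            public int compare(Entry<String, Integer> a, Entry<String, Integer> b) {
                return b.getValue().compareTo(a.getValue());
            }
        });
        for (Entry<String, Integer> entry : entries) {
            context.write(new IntWritable(entry.getValue()), new Text(entry.getKey()));
        }
    }
}

Note that each reducer only sorts its own partition; run the job with a single reduce task if you need one globally ordered output file.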
Answer 2 (score: 1)
Basically, you need to sort by value. There are two ways to achieve this, but in short you need two map-reduce jobs, i.e. you run a second map-reduce on the output of the first one.

After the normal word-count map-reduce has finished, add one more map-reduce that takes the output of the first job as the input of the second. In the map phase of the second job you can use a custom class as the key, e.g.

class WordCountVo implements WritableComparable<WordCountVo>

where you have to override the

public int compareTo(WordCountVo wordCountVo)

method. In WordCountVo you keep both the word and the count, but compare based on the count only. E.g. these would be the member variables of WordCountVo:

private String word;
private Long count;

Now, when you receive the key-value pairs in the second reducer, your data will already be sorted by value. All you need to do is write out the key-value pairs using the context; see the sketch below. Hope this helps.
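Here is one possible shape for that key class (a sketch under my own assumptions: the answer only specifies the two fields and a count-based compareTo, while the serialization methods and the word tiebreaker are my additions):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

public class WordCountVo implements WritableComparable<WordCountVo> {

    private String word;
    private Long count;

    // Hadoop instantiates keys reflectively, so a no-arg constructor is required.
    public WordCountVo() {
    }

    public WordCountVo(String word, long count) {
        this.word = word;
        this.count = count;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(word);
        out.writeLong(count);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        word = in.readUTF();
        count = in.readLong();
    }

    // Descending by count; the word only breaks ties so that distinct words
    // with equal counts are not grouped into a single reduce call.
    @Override
    public int compareTo(WordCountVo other) {
        int byCount = other.count.compareTo(this.count);
        return byCount != 0 ? byCount : this.word.compareTo(other.word);
    }

    @Override
    public String toString() {
        return count + "\t" + word;
    }
}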