I am writing my first MapReduce job. It's a simple task: just counting the alphanumeric characters in a file. I have managed to build my jar file and run it, but apart from the debug output I can't find the MR job's output anywhere. Could you help me?
My driver class:
package no.hib.mod250.hadoop;

// CharacterCountMapper and CharacterCountReducer live in the same package, so no imports are needed for them
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
public class CharacterCountDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // Create a Job using the configuration processed by ToolRunner
        Job job = Job.getInstance(getConf());

        // Input and output paths (hard-coded for now)
        Path in = new Path("/tmp/filein");
        Path out = new Path("/tmp/fileout");

        // Specify various job-specific parameters
        job.setJobName("Character-Count");
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(CharacterCountMapper.class);
        job.setReducerClass(CharacterCountReducer.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.setInputPaths(job, in);
        FileOutputFormat.setOutputPath(job, out);
        job.setJarByClass(CharacterCountDriver.class);

        job.submit();
        return 0;
    }

    public static void main(String[] args) throws Exception {
        // Let ToolRunner handle generic command-line options
        int res = ToolRunner.run(new Configuration(), new CharacterCountDriver(), args);
        System.exit(res);
    }
}
Then my mapper class:
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
public class CharacterCountMapper extends
        Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);

    @Override
    protected void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        String strValue = value.toString();
        StringTokenizer chars = new StringTokenizer(strValue.replaceAll("[^a-zA-Z0-9]", ""));
        while (chars.hasMoreTokens()) {
            context.write(new Text(chars.nextToken()), one);
        }
    }
}
And the reducer:
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
public class CharacterCountReducer extends
        Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int charCount = 0;
        for (IntWritable val : values) {
            charCount += val.get();
        }
        context.write(key, new IntWritable(charCount));
    }
}
It all looks fine to me. I build a runnable jar from my IDE and execute it as follows:
$ ./hadoop jar ~/Desktop/example_MapReduce.jar no.hib.mod250.hadoop.CharacterCountDriver
14/11/27 19:36:42 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
14/11/27 19:36:42 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
14/11/27 19:36:42 INFO input.FileInputFormat: Total input paths to process : 1
14/11/27 19:36:42 INFO mapreduce.JobSubmitter: number of splits:1
14/11/27 19:36:43 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local316715466_0001
14/11/27 19:36:43 WARN conf.Configuration: file:/tmp/hadoop-roberto/mapred/staging/roberto316715466/.staging/job_local316715466_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
14/11/27 19:36:43 WARN conf.Configuration: file:/tmp/hadoop-roberto/mapred/staging/roberto316715466/.staging/job_local316715466_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
14/11/27 19:36:43 WARN conf.Configuration: file:/tmp/hadoop-roberto/mapred/local/localRunner/roberto/job_local316715466_0001/job_local316715466_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
14/11/27 19:36:43 WARN conf.Configuration: file:/tmp/hadoop-roberto/mapred/local/localRunner/roberto/job_local316715466_0001/job_local316715466_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
14/11/27 19:36:43 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
14/11/27 19:36:43 INFO mapred.LocalJobRunner: OutputCommitter set in config null
14/11/27 19:36:43 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
14/11/27 19:36:43 INFO mapred.LocalJobRunner: Waiting for map tasks
14/11/27 19:36:43 INFO mapred.LocalJobRunner: Starting task: attempt_local316715466_0001_m_000000_0
14/11/27 19:36:43 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
14/11/27 19:36:43 INFO mapred.MapTask: Processing split: file:/tmp/filein:0+434
14/11/27 19:36:43 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
I then expected my output file to be in /tmp/fileout, but instead it seems to be empty:
$ tree /tmp/fileout/
/tmp/fileout/
└── _temporary
└── 0
2 directories, 0 files
Is there something I'm missing? Can anyone help me?

Regards :-)
Edit:
I nearly found the solution on this other post.
In CharacterCountDriver I replaced job.submit() with job.waitForCompletion(true), and I now get much more detailed output.
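For reference, the change amounts to something like this at the end of run() (just a sketch; the rest of the driver stays as posted above, and turning the boolean result into an exit code is my own addition):

    // Block until the job finishes instead of returning right after submission
    return job.waitForCompletion(true) ? 0 : 1;

With the driver waiting for the job to complete, /tmp/fileout now actually contains files: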
/tmp/fileout/
├── part-r-00000
└── _SUCCESS
0 directories, 2 files
But I still don't know how to read it: _SUCCESS is empty, and part-r-00000 is not what I was expecting:
Absorbantandyellowandporousishe 1
AreyoureadykidsAyeAyeCaptain 1
ICanthearyouAYEAYECAPTAIN 1
Ifnauticalnonsensebesomethingyouwish 1
Ohh 1
READY 1
SPONGEBOBSQUAREPANTS 1
SpongebobSquarepants 3
Spongebobsquarepants 4
Thendroponthedeckandfloplikeafish 1
Wholivesinapineappleunderthesea 1
Any suggestions? Is there perhaps a bug in my code? Thanks.
Answer 0 (score: 0)
part-r-00000 is the name of the reducer output file. If you had more reducers, they would be numbered part-r-00001, and so on.
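Since your logs show the job running with the LocalJobRunner against file:/tmp/filein, the output lives on the local filesystem, so you can view it directly with something like (assuming the paths from your driver):

$ cat /tmp/fileout/part-r-00000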
Answer 1 (score: 0)
If I understand correctly, you want the program to count the alphanumeric characters in the input file. That is not what your code is doing, though. You can change the mapper so that it counts the alphanumeric characters of each line:
String strValue = value.toString();
strValue = strValue.replaceAll("[^a-zA-Z0-9]", "");
context.write(new Text("alphanumeric"), new IntWritable(strValue.length()));
This should fix your program. Basically, your mapper was emitting the alphanumeric characters of each line as keys, and the reducer was accumulating a count per key. With my change you only use a single key, "alphanumeric", so the reducer sums the per-line counts into one total. The key could be anything else and it would still work.
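For context, here is roughly what the whole mapper would look like with that change (just a sketch based on your posted class; the reducer can stay exactly as it is):

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class CharacterCountMapper extends
        Mapper<Object, Text, Text, IntWritable> {

    @Override
    protected void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        // Strip everything that is not a letter or digit, then emit the length of what remains
        String strValue = value.toString().replaceAll("[^a-zA-Z0-9]", "");
        context.write(new Text("alphanumeric"), new IntWritable(strValue.length()));
    }
}

Since every line now contributes to the same key, part-r-00000 will contain a single line with the total number of alphanumeric characters in the input.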