I am currently trying to figure out what happens when you run a MapReduce job by putting some System.out.println() calls at certain points in the code, but none of those print statements show up in my terminal while the job is running. Can somebody help me figure out what exactly I am doing wrong?
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountJob {

    public static int iterations;

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            System.out.println("blalblbfbbfbbbgghghghghghgh");
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                // Consume each token exactly once; calling nextToken() twice per
                // iteration skips every other word and throws
                // NoSuchElementException on an odd token count.
                String myWord = itr.nextToken();
                int n = 0;
                while (n < 5) {
                    myWord = myWord + "Test my appending words";
                    n++;
                }
                System.out.println("Print my word: " + myWord);
                word.set(myWord);
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        if (args.length != 3) {
            System.err.println("Usage: WordCountJob <in> <out> <iterations>");
            System.exit(2);
        }
        iterations = Integer.parseInt(args[2]);
        Path inPath = new Path(args[0]);
        Path outPath = null;
        for (int i = 0; i < iterations; ++i) {
            System.out.println("Iteration number: " + i);
            outPath = new Path(args[1] + i);
            Job job = new Job(conf, "WordCountJob");
            job.setJarByClass(WordCountJob.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, inPath);
            FileOutputFormat.setOutputPath(job, outPath);
            job.waitForCompletion(true);
            // Chain iterations: this round's output becomes the next round's input
            inPath = outPath;
        }
    }
}
Answer 0 (score: 20)
It depends on how you submit the job; I assume you submitted it with bin/hadoop jar yourJar.jar?
Your System.out.println() output is only visible in your main method, because the mappers and reducers are executed by Hadoop in separate JVMs, and all of their output is redirected to special per-task log files (out/log files).
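Why nothing reaches the submitting terminal can be illustrated with plain Java, with no Hadoop involved: the child task's stdout is rebound to a file before user code runs. The sketch below uses System.setOut as a stand-in for what Hadoop's child JVM effectively does; the class name and messages are made up for illustration.

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class StdoutRedirectDemo {
    public static void main(String[] args) {
        // Keep a handle to the real terminal stream
        PrintStream terminal = System.out;
        ByteArrayOutputStream taskLog = new ByteArrayOutputStream();

        // Hadoop's child task JVM does something comparable: stdout is
        // swapped for a per-task log destination before the mapper runs
        System.setOut(new PrintStream(taskLog));
        System.out.println("Print my word: hello");

        // Restore the terminal; the message never appeared there
        System.setOut(terminal);
        System.out.println("task log captured: " + taskLog.toString().trim());
    }
}
```

The println inside the redirected section lands in the captured stream, not on the console, which is exactly why the job's print statements only show up in the task logs.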
I would recommend using Apache Commons Logging in your mapper instead:
Log log = LogFactory.getLog(YOUR_MAPPER_CLASS.class)
and then log your information with:
log.info("Your message");
If you are in "local" mode you can see this log in your shell; otherwise the log is stored somewhere on the machine where the task executed. In that case, use the JobTracker's web UI to browse the log files; it is very convenient. By default the JobTracker runs on port 50030.
Answer 1 (score: 1)
Alternatively, you can use the MultipleOutputs class and redirect all of your log data into one output file (log).
MultipleOutputs<Text, Text> mos = new MultipleOutputs<Text, Text>(context);
Text tKey = new Text("key");
Text tVal = new Text("log message");
mos.write(tKey, tVal, <lOG_FILE>);
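For the simpler goal of collecting all messages in one file, the same idea can be sketched with the JDK's own java.util.logging instead of Hadoop's MultipleOutputs. This is only an analogue, not the Hadoop API, and the file name mapper.log is an assumption for the example.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class SingleFileLogDemo {
    public static void main(String[] args) throws IOException {
        Logger log = Logger.getLogger("mapper");
        // Route every record for this logger to one file, analogous to
        // sending all log data to a single named output
        FileHandler handler = new FileHandler("mapper.log");
        handler.setFormatter(new SimpleFormatter());
        log.addHandler(handler);
        log.setUseParentHandlers(false); // keep messages off the console

        log.info("Print my word: hello");
        handler.close();

        System.out.println("log file written: " + Files.exists(Paths.get("mapper.log")));
    }
}
```

Every log.info call ends up in mapper.log, so the messages from all records are collected in a single place that survives after the run.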