I want to use HPROF to profile my Hadoop job. The problem is that I get the TRACES, but there are no CPU SAMPLES in the profile.out file. The code I use in the run method is:
/** Get configuration */
Configuration conf = getConf();
conf.set("textinputformat.record.delimiter","\n\n");
conf.setStrings("args", args);
/** JVM PROFILING */
conf.setBoolean("mapreduce.task.profile", true);
conf.set("mapreduce.task.profile.params", "-agentlib:hprof=cpu=samples," +
"heap=sites,depth=6,force=n,thread=y,verbose=n,file=%s");
conf.set("mapreduce.task.profile.maps", "0-2");
conf.set("mapreduce.task.profile.reduces", "");
/** Job configuration */
Job job = Job.getInstance(conf, "HadoopSearch"); // new Job(conf, name) is deprecated
job.setJarByClass(Search.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(NullWritable.class);
/** Set Mapper and Reducer, use identity reducer*/
job.setMapperClass(Map.class);
job.setReducerClass(Reducer.class);
/** Set input and output formats */
job.setInputFormatClass(TextInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
/** Set input and output path */
FileInputFormat.addInputPath(job, new Path("/user/niko/16M"));
FileOutputFormat.setOutputPath(job, new Path(cmd.getOptionValue("output")));
return job.waitForCompletion(true) ? 0 : 1;
How can I get the CPU SAMPLES written to the output?
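For context, the same profiling properties can also be passed at submission time instead of being hard-coded in the driver, provided the driver goes through ToolRunner/GenericOptionsParser (which the use of getConf() suggests). This is a sketch only; the jar name, main class, and paths below are placeholders:

```
# Hypothetical submission command; jar, class, and paths are placeholders.
hadoop jar search.jar Search \
  -Dmapreduce.task.profile=true \
  -Dmapreduce.task.profile.maps=0-2 \
  -Dmapreduce.task.profile.reduces='' \
  -Dmapreduce.task.profile.params='-agentlib:hprof=cpu=samples,heap=sites,depth=6,force=n,thread=y,verbose=n,file=%s' \
  /user/niko/16M /user/niko/out
```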
I also get a strange error message on stderr, but I don't think it is related, since it is also present when profiling is set to false or when the profiling code is commented out. The error is:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.impl.MetricsSystemImpl).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
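(As the warning itself says, this only means no log4j appender is configured for the client-side process; it does not affect the job. A minimal log4j.properties on the client classpath silences it, assuming log4j 1.2:)

```
# Minimal log4j 1.2 configuration; logs everything at INFO and above to the console.
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```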
Answer 0 (score: 2)
YARN (or MRv1) kills the container as soon as the task is done, so the CPU samples cannot be written to your profiling file. In fact, your traces should be truncated as well.
You have to add the following options (or their equivalents for your Hadoop version):
yarn.nodemanager.sleep-delay-before-sigkill.ms = 30000
# No. of ms to wait between sending a SIGTERM and SIGKILL to a container
yarn.nodemanager.process-kill-wait.ms = 30000
# Max time to wait for a process to come up when trying to cleanup a container
mapreduce.tasktracker.tasks.sleeptimebeforesigkill = 30000
# Same in MRv1?
(30 seconds seems to be enough)
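The first two are cluster-side YARN settings, so they go into yarn-site.xml on the NodeManagers rather than into the job configuration. A sketch of the corresponding entries (property names taken from the list above; the 30000 ms value is the suggested delay):

```
<!-- yarn-site.xml: give containers time to flush profiling output before SIGKILL -->
<property>
  <name>yarn.nodemanager.sleep-delay-before-sigkill.ms</name>
  <value>30000</value>
</property>
<property>
  <name>yarn.nodemanager.process-kill-wait.ms</name>
  <value>30000</value>
</property>
```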
Answer 1 (score: 0)
This may be caused by https://issues.apache.org/jira/browse/MAPREDUCE-5465, which is fixed in newer Hadoop versions.
So the solution seems to be: