I have a MapReduce program. When it runs on 1% of the dataset, these are the job counters it reports:
Job Counters
Launched map tasks=3
Launched reduce tasks=45
Data-local map tasks=1
Rack-local map tasks=2
Total time spent by all maps in occupied slots (ms)=29338
Total time spent by all reduces in occupied slots (ms)=200225
Total time spent by all map tasks (ms)=29338
Total time spent by all reduce tasks (ms)=200225
Total vcore-seconds taken by all map tasks=29338
Total vcore-seconds taken by all reduce tasks=200225
Total megabyte-seconds taken by all map tasks=30042112
Total megabyte-seconds taken by all reduce tasks=205030400
How can I extrapolate from this the time needed to process 100% of the data? My reasoning was that it should take about 100 times longer, since the 1% sample is a single block, but when the job ran on 100% of the data it actually took about 134 times longer (a rough comparison of the two counter sets is sketched after the full-run counters below).
Counters for the run on 100% of the data:
Job Counters
Launched map tasks=2113
Launched reduce tasks=45
Data-local map tasks=1996
Rack-local map tasks=117
Total time spent by all maps in occupied slots (ms)=26800451
Total time spent by all reduces in occupied slots (ms)=3607607
Total time spent by all map tasks (ms)=26800451
Total time spent by all reduce tasks (ms)=3607607
Total vcore-seconds taken by all map tasks=26800451
Total vcore-seconds taken by all reduce tasks=3607607
Total megabyte-seconds taken by all map tasks=27443661824
Total megabyte-seconds taken by all reduce tasks=3694189568
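As a rough sanity check on the linear-scaling assumption, here is a small Python sketch that simply compares how each phase grew between the two runs; the values are copied from the counters above.

```python
# Counter values copied from the 1% and 100% runs above (times in ms).
small = {"maps": 3,    "map_ms": 29_338,     "reduce_ms": 200_225}
full  = {"maps": 2113, "map_ms": 26_800_451, "reduce_ms": 3_607_607}

print(f"map tasks:   {full['maps'] / small['maps']:.0f}x more")           # ~704x
print(f"map time:    {full['map_ms'] / small['map_ms']:.0f}x more")       # ~914x
print(f"reduce time: {full['reduce_ms'] / small['reduce_ms']:.0f}x more") # ~18x
```

The two phases clearly did not grow by the same factor, which is what the answers below address.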
Answer 0 (score: 2)
Predicting how MapReduce will perform based on its behavior on a small fraction of the data is not easy. If you look at the logs of the 1% run, it used 45 reducers. The same number of reducers is still used for the 100% run. That means the time the reducers need to process the complete shuffle-and-sort output does not grow linearly with input size.
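To make that concrete, here is a minimal sketch of the per-reducer load; the block size and the assumption that map output is proportional to input are illustrative guesses, not values taken from this job:

```python
# Rough illustration: with a fixed number of reducers, each reducer's input
# grows in proportion to the total map output.
BLOCK_MB = 128          # assumed HDFS block size (not from the job counters)
REDUCERS = 45           # same in both runs, per the job counters

for maps in (3, 2113):  # 1% run vs 100% run
    total_map_output_mb = maps * BLOCK_MB      # crude assumption: output ~ input
    per_reducer_mb = total_map_output_mb / REDUCERS
    print(f"{maps:5d} maps -> ~{per_reducer_mb:8.1f} MB per reducer")
```

With the map phase spread over many more tasks but the reducer count held at 45, each reducer ends up with roughly 700 times more data in the full run, so the reduce phase cannot be extrapolated with the same multiplier as the map phase.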
Some mathematical models have been developed to predict MapReduce performance.
One of these research papers provides more insight into MapReduce performance:
http://personal.denison.edu/~bressoud/graybressoudmcurcsm2012.pdf
Hope this information is useful.
Answer 1 (score: 0)
As mentioned before, predicting the runtime of a MapReduce job is not trivial. The problem is that a job's execution time is determined by the finish time of the last parallel task, and a task's execution time depends on the hardware it runs on, concurrent workload, data skew, and so on...
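For a very rough first-order estimate, one can model the job as waves of tasks running over a fixed number of slots; this is only a sketch, and every number below (slot counts, average task times) is a placeholder rather than something measured from the job above:

```python
import math

def rough_job_time(num_maps, num_reduces, map_slots, reduce_slots,
                   avg_map_s, avg_reduce_s):
    """Very rough makespan estimate: tasks run in waves over the available slots.
    Ignores skew, stragglers, shuffle/map overlap and scheduling overhead."""
    map_waves = math.ceil(num_maps / map_slots)
    reduce_waves = math.ceil(num_reduces / reduce_slots)
    return map_waves * avg_map_s + reduce_waves * avg_reduce_s

# Placeholder numbers for illustration only.
print(rough_job_time(num_maps=2113, num_reduces=45,
                     map_slots=40, reduce_slots=45,
                     avg_map_s=12.0, avg_reduce_s=80.0))
```

Skew and stragglers break an estimate like this very quickly, which is where dedicated tooling comes in.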
The Starfish project from Duke University may be worth a look. It includes a performance model for Hadoop jobs, can tune job configurations, and has some visualization features that make debugging easier.