I have a 4-node cluster with 96 GB of memory in total.
I split the input into 100 files and configured the job to use 100 mappers. Judging from the logs, the mappers appear to run sequentially.
[2014/10/08 15:22:36] INFO: Total input paths to process : 100
[2014/10/08 15:22:36] INFO: number of splits:100
[2014/10/08 15:22:36] INFO: Starting task: attempt_local1244628585_0001_m_000000_0
[2014/10/08 15:22:36] INFO: Submitting tokens for job: job_local1244628585_0001
[2014/10/08 15:22:36] INFO: Processing split: hdfs://.../input/in10:0+2
[2014/10/08 15:22:38] INFO: Task:attempt_local1244628585_0001_m_000000_0 is done. And is in the process of committing
[2014/10/08 15:22:38] INFO: Task attempt_local1244628585_0001_m_000000_0 is allowed to commit now
[2014/10/08 15:22:38] INFO: Saved output of task 'attempt_local1244628585_0001_m_000000_0' to hdfs://.../output/_temporary/0/task_local1244628585_0001_m_000000
[2014/10/08 15:22:38] INFO: hdfs://.../input/in10:0+2
[2014/10/08 15:22:38] INFO: Task 'attempt_local1244628585_0001_m_000000_0' done.
[2014/10/08 15:22:38] INFO: Finishing task: attempt_local1244628585_0001_m_000000_0
[2014/10/08 15:22:38] INFO: Starting task: attempt_local1244628585_0001_m_000001_0
...
...and so on. Essentially, it finishes one task before starting the next.
Answer 0 (score: 1)
You are running in local mode:
[2014/10/08 15:22:36] INFO: Starting task: attempt_**local**1244628585_0001_m_000000_0
Depending on your Hadoop version, you need to configure the JobTracker address (MRv1) or the ResourceManager address (YARN) so that the job is submitted to the cluster instead of being run by the local job runner in a single JVM.
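As a sketch for Hadoop 2.x with YARN, you would set `mapreduce.framework.name` to `yarn` in `mapred-site.xml` and point `yarn.resourcemanager.hostname` at your master node (the hostname `master` below is a placeholder, not from the original question):

```xml
<!-- mapred-site.xml: submit jobs to YARN instead of the local runner -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```

```xml
<!-- yarn-site.xml: "master" is a placeholder for your ResourceManager host -->
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
</configuration>
```

For Hadoop 1.x (MRv1), the equivalent is setting `mapred.job.tracker` (e.g. `master:9001`) in `mapred-site.xml`. Once the job runs on the cluster, the task attempt IDs in the logs will no longer contain `local`.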