17/11/29 19:32:31 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
17/11/29 19:32:31 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
17/11/29 19:32:31 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
17/11/29 19:32:31 INFO mapred.LocalJobRunner: Waiting for map tasks
17/11/29 19:32:31 INFO mapred.LocalJobRunner: Starting task: attempt_local2072208822_0001_m_000000_0
17/11/29 19:32:31 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
17/11/29 19:32:31 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
17/11/29 19:32:31 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
17/11/29 19:32:32 INFO mapred.Task: Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@1106b4d9
17/11/29 19:32:32 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/input/file02.txt:0+27
17/11/29 19:32:32 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
17/11/29 19:32:32 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
17/11/29 19:32:32 INFO mapred.MapTask: soft limit at 83886080
17/11/29 19:32:32 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
17/11/29 19:32:32 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
17/11/29 19:32:32 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
17/11/29 19:32:32 INFO mapred.LocalJobRunner:
17/11/29 19:32:32 INFO mapred.MapTask: Starting flush of map output
17/11/29 19:32:32 INFO mapred.MapTask: Spilling map output
17/11/29 19:32:32 INFO mapred.MapTask: bufstart = 0; bufend = 44; bufvoid = 104857600
17/11/29 19:32:32 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214384(104857536); length = 13/6553600
17/11/29 19:32:32 INFO mapred.MapTask: Finished spill 0
17/11/29 19:32:32 INFO mapred.Task: Task:attempt_local2072208822_0001_m_000000_0 is done. And is in the process of committing
17/11/29 19:32:32 INFO mapred.LocalJobRunner: map
17/11/29 19:32:32 INFO mapred.Task: Task 'attempt_local2072208822_0001_m_000000_0' done.
17/11/29 19:32:32 INFO mapred.LocalJobRunner: Finishing task: attempt_local2072208822_0001_m_000000_0
17/11/29 19:32:32 INFO mapred.LocalJobRunner: Starting task: attempt_local2072208822_0001_m_000001_0
17/11/29 19:32:32 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
17/11/29 19:32:32 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
17/11/29 19:32:32 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
17/11/29 19:32:32 INFO mapred.Task: Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@16def9af
17/11/29 19:32:32 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/input/file01.txt:0+21
17/11/29 19:32:32 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
17/11/29 19:32:32 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
17/11/29 19:32:32 INFO mapred.MapTask: soft limit at 83886080
17/11/29 19:32:32 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
17/11/29 19:32:32 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
17/11/29 19:32:32 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
17/11/29 19:32:32 INFO mapred.LocalJobRunner:
17/11/29 19:32:32 INFO mapred.MapTask: Starting flush of map output
17/11/29 19:32:32 INFO mapred.MapTask: Spilling map output
17/11/29 19:32:32 INFO mapred.MapTask: bufstart = 0; bufend = 38; bufvoid = 104857600
17/11/29 19:32:32 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214384(104857536); length = 13/6553600
17/11/29 19:32:32 INFO mapred.MapTask: Finished spill 0
17/11/29 19:32:32 INFO mapred.Task: Task:attempt_local2072208822_0001_m_000001_0 is done. And is in the process of committing
17/11/29 19:32:32 INFO mapred.LocalJobRunner: map
17/11/29 19:32:32 INFO mapred.Task: Task 'attempt_local2072208822_0001_m_000001_0' done.
17/11/29 19:32:32 INFO mapred.LocalJobRunner: Finishing task: attempt_local2072208822_0001_m_000001_0
17/11/29 19:32:32 INFO mapred.LocalJobRunner: map task executor complete.
17/11/29 19:32:32 INFO mapred.LocalJobRunner: Waiting for reduce tasks
17/11/29 19:32:32 INFO mapred.LocalJobRunner: Starting task: attempt_local2072208822_0001_r_000000_0
17/11/29 19:32:32 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
17/11/29 19:32:32 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
17/11/29 19:32:32 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
17/11/29 19:32:32 INFO mapred.Task: Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@42a3c42
17/11/29 19:32:32 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@afbba44
17/11/29 19:32:32 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=334338464, maxSingleShuffleLimit=83584616, mergeThreshold=220663392, ioSortFactor=10, memToMemMergeOutputsThreshold=10
17/11/29 19:32:32 INFO reduce.EventFetcher: attempt_local2072208822_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
17/11/29 19:32:32 INFO mapred.LocalJobRunner: reduce task executor complete.
17/11/29 19:32:32 INFO mapreduce.Job: Job job_local2072208822_0001 running in uber mode : false
17/11/29 19:32:32 INFO mapreduce.Job: map 100% reduce 0%
17/11/29 19:32:32 WARN mapred.LocalJobRunner: job_local2072208822_0001
java.lang.Exception: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in localfetcher#1
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:489)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:556)
Caused by: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in localfetcher#1
    at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:346)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.FileNotFoundException: D:/tmp/hadoop-Semab%20Ali/mapred/local/localRunner/Semab%20Ali/jobcache/job_local2072208822_0001/attempt_local2072208822_0001_m_000001_0/output/file.out.index
    at org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:212)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:786)
    at org.apache.hadoop.io.SecureIOUtils.openFSDataInputStream(SecureIOUtils.java:155)
    at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:70)
    at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:62)
    at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:57)
    at org.apache.hadoop.mapreduce.task.reduce.LocalFetcher.copyMapOutput(LocalFetcher.java:125)
    at org.apache.hadoop.mapreduce.task.reduce.LocalFetcher.doCopy(LocalFetcher.java:103)
    at org.apache.hadoop.mapreduce.task.reduce.LocalFetcher.run(LocalFetcher.java:86)
17/11/29 19:32:33 INFO mapreduce.Job: Job job_local2072208822_0001 failed with state FAILED due to: NA
17/11/29 19:32:33 INFO mapreduce.Job: Counters: 23
    File System Counters
        FILE: Number of bytes read=6947
        FILE: Number of bytes written=658098
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=75
        HDFS: Number of bytes written=0
        HDFS: Number of read operations=12
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Map-Reduce Framework
        Map input records=2
        Map output records=8
        Map output bytes=82
        Map output materialized bytes=85
        Input split bytes=216
        Combine input records=8
        Combine output records=6
        Spilled Records=6
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=0
        Total committed heap usage (bytes)=599261184
    File Input Format Counters
        Bytes Read=48
I am a Windows user. This is my yarn-site.xml configuration. One more thing: before running this job I start the datanode and namenode manually rather than with the start-all.cmd command. Is there anything else I need to start as well, for example the ResourceManager?

yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
Answer (score 0):
Because your username contains a space, it shows up as %20 in the local directory path, and that is what causes this error.
To resolve the issue, do the following:
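For example (a minimal sketch, assuming the only culprit is the space that appears as %20 in the local job directories), you can point hadoop.tmp.dir at a path with no spaces in core-site.xml, since the mapred/local directories shown in the stack trace are created under it; the directory name below is only an illustration, not a required value:

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <!-- example location with no spaces; use whatever drive/folder suits you -->
    <value>D:/hadoop-tmp</value>
  </property>
</configuration>

After changing it, restart your HDFS daemons and re-run the job so the new local directory is picked up.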
After this step, you may run into the following problem: There are 0 datanode(s) running and no node(s) are excluded in this operation
On Windows you can deal with it as follows:
Delete the output directory with this command:
hdfs dfs -rm -r /path/to/directory
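For instance, if the job's output directory was /user/output (substitute your own path), removing it and re-running would look like this; the jar name and main class below are placeholders for your own job:

hdfs dfs -rm -r /user/output
hadoop jar wordcount.jar WordCount /user/input /user/output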
Enjoy.