I have seen multiple questions related to AM Container launch errors, but none of them solved my issue.

I installed Hadoop 2.7.5 on my Mac OSX High Sierra laptop and tried the Pi example mapreduce job:
hadoop jar /usr/local/hadoop/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.5.jar pi 2 4

I am running all the services:
$ jps
69555 NameNode
69954 NodeManager
69750 SecondaryNameNode
70806 JobHistoryServer
69643 DataNode
71194 Jps
69866 ResourceManager

This is the output I get:
$ hadoop jar /usr/local/hadoop/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.5.jar pi 2 4
Number of Maps = 2
Samples per Map = 4
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/03/25 13:30:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Wrote input for Map #0
Wrote input for Map #1
Starting Job
18/03/25 13:30:43 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/03/25 13:30:43 INFO input.FileInputFormat: Total input paths to process : 2
18/03/25 13:30:43 INFO mapreduce.JobSubmitter: number of splits:2
18/03/25 13:30:44 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1521963635636_0004
18/03/25 13:30:44 INFO impl.YarnClientImpl: Submitted application application_1521963635636_0004
18/03/25 13:30:44 INFO mapreduce.Job: The url to track the job: http://AbdealiJK-Mac.local:8088/proxy/application_1521963635636_0004/
18/03/25 13:30:44 INFO mapreduce.Job: Running job: job_1521963635636_0004
18/03/25 13:30:51 INFO mapreduce.Job: Job job_1521963635636_0004 running in uber mode : false
18/03/25 13:30:51 INFO mapreduce.Job: map 0% reduce 0%
18/03/25 13:30:51 INFO mapreduce.Job: Job job_1521963635636_0004 failed with state FAILED due to: Application application_1521963635636_0004 failed 2 times due to AM Container for appattempt_1521963635636_0004_000002 exited with exitCode: -1
For more detailed output, check application tracking page:http://AbdealiJK-Mac.local:8088/cluster/app/application_1521963635636_0004Then, click on links to logs of each attempt.
Diagnostics: File /Users/abdealijk/hadoop/nm-local-dir/usercache/abdealijk/appcache/application_1521963635636_0004/container_1521963635636_0004_02_000001 does not exist
Failing this attempt. Failing the application.
18/03/25 13:30:51 INFO mapreduce.Job: Counters: 0
Job Finished in 7.986 seconds
java.io.FileNotFoundException: File does not exist: hdfs://localhost/user/abdealijk/QuasiMonteCarlo_1521964841970_1162968685/out/reduce-out
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1309)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1820)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1843)
at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:355)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
The error seems to be saying:
File /Users/abdealijk/hadoop/nm-local-dir/usercache/abdealijk/appcache/application_1521963635636_0004/container_1521963635636_0004_02_000001 does not exist
But when I check it:

$ ls -lh ~/hadoop/nm-local-dir/usercache/abdealijk/appcache/application_1521963635636_0004
total 0
drwxr-xr-x 6 abdealijk staff 192B Mar 25 13:30 filecache
I have permission to write, I own the folder, and so on. But the container folder is still not being created there.

EDIT 1: Logs using the YARN-RM webUI / yarn command

I have tried checking the logs in the YARN-RM webUI as well as with:
yarn logs -applicationId
but both of them say that no AM container was started and no logs were found.
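For reference, the full invocation presumably needs the application id from the failed run above, i.e. something like:

yarn logs -applicationId application_1521963635636_0004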
EDIT 2: Here is what I get inside that folder:
$ tree ~/hadoop/nm-local-dir/usercache/abdealijk/appcache/application_1522077498598_0003
~/hadoop/nm-local-dir/usercache/abdealijk/appcache/application_1522077498598_0003
└── filecache
    ├── 10
    │   └── job.splitmetainfo
    ├── 11
    │   └── job.jar
    │       └── job.jar
    ├── 12
    │   └── job.split
    └── 13
        └── job.xml

6 directories, 4 files
No container folders :(

EDIT 3: My core-site.xml contains:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <!-- <value>hdfs://localhost/</value> -->
        <value>hdfs://localhost:8020/</value>
    </property>
</configuration>

I have tried using both hdfs://localhost:8020/ and hdfs://localhost/.

I think this may be an issue with the URI.
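A quick way to double-check which default filesystem URI the client actually resolves, assuming the stock hdfs getconf utility that ships with 2.7.x:

$ hdfs getconf -confKey fs.defaultFS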
Answer 0 (score: 0)
Here is what fixed it:
hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>${user.home}/hadoop/hdfs/datanode</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>${user.home}/hadoop/hdfs/namenode</value>
    </property>
</configuration>
core-site.xml:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000/</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/tmp/hadoop-tmpdir</value>
    </property>
</configuration>
Execute:
$ rm -rf ~/hadoop # To delete my previous folders
$ mkdir -p ~/hadoop/hdfs/namenode
$ mkdir -p ~/hadoop/hdfs/datanode
$ hdfs namenode -format
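After the format, the daemons need to be brought back up before re-running the job. A minimal sketch, assuming the standard Hadoop sbin scripts are on the PATH (the history server matches the JobHistoryServer in the jps listing above):

$ start-dfs.sh
$ start-yarn.sh
$ mr-jobhistory-daemon.sh start historyserver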
Now running the command lets the containers start successfully.