My configuration is as follows:
I am using two machines for my Hadoop experiment: pc720 (10.10.1.1) and pc719 (10.10.1.2).
JDK (version 1.8.0_181) was installed via apt-get. Hadoop 2.7.1 was downloaded from https://archive.apache.org/dist/hadoop/common/hadoop-2.7.1/ and placed in /opt/.
Step 1:
I configured /etc/bash.bashrc, adding:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH=${JAVA_HOME}/bin:${PATH}
export HADOOP_HOME=/opt/hadoop-2.7.1
export PATH=${HADOOP_HOME}/bin:${PATH}
export PATH=${HADOOP_HOME}/sbin:${PATH}
Then I ran source /etc/profile.
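As an optional sanity check (not part of the original steps), the standard version commands can confirm the environment took effect:
java -version        # should report 1.8.0_181
hadoop version       # should report Hadoop 2.7.1
echo $HADOOP_HOME    # should print /opt/hadoop-2.7.1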
Step 2:
I configured the XML files:
slaves:
10.10.1.2
core-site.xml
<property>
<name>fs.defautFS</name>
<value>hdfs://10.10.1.1:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/root/hadoop_store/tmp</value>
</property>
hdfs-site.xml
<property>
<name>dfs:replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/root/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/root/hadoop_store/hdfs/datanode</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>10.10.1.1:9001</value>
</property>
<property>
<name>dfs.namenode.rpc-address</name>
<value>10.10.1.1:8080</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobtracker.address</name>
<value>10.10.1.1:9002</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>10.10.1.1:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>10.10.1.1:19888</value>
</property>
<property>
<name>mapred.acls.enabled</name>
<value>true</value>
</property>
yarn-site.xml
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>10.10.1.1</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>${yarn.resourcemanager.hostname}:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>${yarn.resourcemanager.hostname}:8030</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>${yarn.resourcemanager.hostname}:8088</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.https.address</name>
<value>${yarn.resourcemanager.hostname}:8090</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>${yarn.resourcemanager.hostname}:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>${yarn.resourcemanager.hostname}:8033</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>8182</value>
</property>
<property>
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>2.1</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>2048</value>
</property>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
Step 3:
In /root/, I created several directories:
mkdir hadoop_store
mkdir hadoop_store/hdfs
mkdir hadoop_store/tmp
mkdir hadoop_store/hdfs/datanode
mkdir hadoop_store/hdfs/namenode
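Equivalently, a single mkdir -p would create the whole tree in one step (a convenience, not what was originally run):
mkdir -p /root/hadoop_store/tmp /root/hadoop_store/hdfs/namenode /root/hadoop_store/hdfs/datanode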
Then I changed into /opt/hadoop-2.7.1/bin and ran:
./hdfs namenode -format
cd ..
cd sbin/
./start-all.sh
./mr-jobhistory-daemon.sh start historyserver
After running jps, pc720 showed: (screenshot not reproduced)
and pc719 showed: (screenshot not reproduced)
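For reference, the daemon lists below are an assumption based on this configuration (slaves contains only 10.10.1.2), not the actual screenshots:
jps   # on pc720 (master), typically: NameNode, SecondaryNameNode, ResourceManager, JobHistoryServer, Jps
jps   # on pc719 (slave), typically:  DataNode, NodeManager, Jps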
Up to this point, I thought my Hadoop 2.7.1 had been installed and configured successfully. But then the problem came.
I changed into /opt/hadoop-2.7.1/share/hadoop/mapreduce/ (its contents shown in a screenshot, not reproduced),
then ran hadoop jar hadoop-mapreduce-examples-2.7.1.jar pi 2 2
The log is below:
Number of Maps  = 2
Samples per Map = 2
Wrote input for Map #0
Wrote input for Map #1
Starting Job
18/11/02 08:14:48 INFO client.RMProxy: Connecting to ResourceManager at /10.10.1.1:8032
18/11/02 08:14:48 INFO input.FileInputFormat: Total input paths to process : 2
18/11/02 08:14:48 INFO mapreduce.JobSubmitter: number of splits:2
18/11/02 08:14:48 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1541144755485_0002
18/11/02 08:14:49 INFO impl.YarnClientImpl: Submitted application application_1541144755485_0002
18/11/02 08:14:49 INFO mapreduce.Job: The url to track the job: http://node-0-link-0:8088/proxy/application_1541144755485_0002/
18/11/02 08:14:49 INFO mapreduce.Job: Running job: job_1541144755485_0002
18/11/02 08:14:53 INFO mapreduce.Job: Job job_1541144755485_0002 running in uber mode : false
18/11/02 08:14:53 INFO mapreduce.Job: map 0% reduce 0%
18/11/02 08:14:53 INFO mapreduce.Job: Job job_1541144755485_0002 failed with state FAILED due to: Application application_1541144755485_0002 failed 2 times due to AM Container for appattempt_1541144755485_0002_000002 exited with exitCode: -1000
For more detailed output, check application tracking page:http://node-0-link-0:8088/cluster/app/application_1541144755485_0002Then, click on links to logs of each attempt.
Diagnostics: File file:/tmp/hadoop-yarn/staging/root/.staging/job_1541144755485_0002/job.splitmetainfo does not exist
java.io.FileNotFoundException: File file:/tmp/hadoop-yarn/staging/root/.staging/job_1541144755485_0002/job.splitmetainfo does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:606)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:819)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:596)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Failing this attempt. Failing the application.
18/11/02 08:14:53 INFO mapreduce.Job: Counters: 0
Job Finished in 5.08 seconds
java.io.FileNotFoundException: File file:/opt/hadoop-2.7.1/share/hadoop/mapreduce/QuasiMonteCarlo_1541168087724_1532373667/out/reduce-out does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:606)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:819)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:596)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1752)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1776)
at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
I have tried many solutions, but none of them seem to work. This problem has puzzled me for about a week. Is there anything wrong with my configuration? What can I do? Please help me.
Thanks!
Answer 0 (score: 0)
I believe you missed the / at the end of fs.defaultFS,
i.e. it should be <value>hdfs://10.10.1.1:9000/</value>
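As a concrete sketch of the corrected property in core-site.xml (note the spelling fs.defaultFS; the question's config has fs.defautFS, a key name Hadoop silently ignores, so the default file:/// local file system would be used instead):
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://10.10.1.1:9000/</value>
</property>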
Note that /tmp/hadoop-yarn/staging/root/.staging/job_1541144755485_0002/job.splitmetainfo is governed by yarn.app.mapreduce.am.staging-dir, which is normally kept in HDFS. This gives us the hint that it is an HDFS problem.
Source: https://www.cloudera.com/documentation/enterprise/5-15-x/topics/cdh_ig_hdfs_cluster_deploy.html
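If one wanted to pin the staging directory explicitly, it can be set in mapred-site.xml; the value below is the Hadoop 2.7.1 default, shown here only as an illustration:
<property>
  <name>yarn.app.mapreduce.am.staging-dir</name>
  <value>/tmp/hadoop-yarn/staging</value>
</property>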
Answer 1 (score: 0)
Running hadoop dfs -ls /tmp shows:
Found 17 items
drwxrwxrwt - root root 4096 2018-11-01 22:33 /tmp/.ICE-unix
drwxrwxrwt - root root 4096 2018-11-01 22:33 /tmp/.Test-unix
drwxrwxrwt - root root 4096 2018-11-01 22:33 /tmp/.X11-unix
drwxrwxrwt - root root 4096 2018-11-01 22:33 /tmp/.XIM-unix
drwxrwxrwt - root root 4096 2018-11-01 22:33 /tmp/.font-unix
drwxr-xr-x - root root 4096 2018-11-04 23:45 /tmp/Jetty_0_0_0_0_50070_hdfs____w2cu08
drwxr-xr-x - root root 4096 2018-11-04 23:45 /tmp/Jetty_10_10_1_1_8088_cluster____.kglkoh
drwxr-xr-x - root root 4096 2018-11-04 23:46 /tmp/Jetty_node.0.link.0_19888_jobhistory____.ckp60y
drwxr-xr-x - root root 4096 2018-11-04 23:45 /tmp/Jetty_node.0.link.0_9001_secondary____.h648yx
drwxr-xr-x - root root 4096 2018-11-02 01:13 /tmp/hadoop-root
-rw-r--r-- 1 root root 6 2018-11-04 23:45 /tmp/hadoop-root-namenode.pid
-rw-r--r-- 1 root root 6 2018-11-04 23:45 /tmp/hadoop-root-secondarynamenode.pid
drwxr-xr-x - root root 4096 2018-11-02 01:14 /tmp/hadoop-yarn
drwxr-xr-x - root root 4096 2018-11-05 00:02 /tmp/hsperfdata_root
-rw-r--r-- 1 root root 6 2018-11-04 23:46 /tmp/mapred-root-historyserver.pid
-rw-r--r-- 1 root root 388 2018-11-01 22:33 /tmp/ntp.conf.new
-rw-r--r-- 1 root root 6
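The entries above (.X11-unix, ntp.conf.new, *.pid files) are clearly the local /tmp, not an HDFS directory, which is consistent with fs.defaultFS not taking effect. Assuming a standard 2.7.1 install, the effective value can be checked directly:
hdfs getconf -confKey fs.defaultFS   # should print hdfs://10.10.1.1:9000/, not file:///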