I am new to Hadoop. I have set up Hadoop on my Mac, and then I tried to run the following:
hadoop jar wordcount.jar /usr/joy/input /usr/joy/output
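For reference, wordcount.jar is built from the stock Hadoop WordCount example; the driver is roughly the following sketch (reproduced from the standard example, so the exact code in my jar may differ slightly):

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: emits (word, 1) for every token in the input line
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer: sums the counts for each word
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // /usr/joy/input
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // /usr/joy/output
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}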
In response to that command, the following messages were printed in the terminal:
16/03/18 17:13:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/03/18 17:13:20 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
16/03/18 17:13:21 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/03/18 17:13:21 INFO input.FileInputFormat: Total input paths to process : 1
16/03/18 17:13:21 INFO mapreduce.JobSubmitter: number of splits:1
16/03/18 17:13:21 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1458279089418_0002
16/03/18 17:13:21 INFO impl.YarnClientImpl: Submitted application application_1458279089418_0002
16/03/18 17:13:21 INFO mapreduce.Job: The url to track the job: http://EN-AbhishekM:8088/proxy/application_1458279089418_0002/
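(On the JobResourceUploader warning above: it asks for the Tool interface and ToolRunner. A minimal sketch of how the driver could be restructured for that, reusing the WordCount mapper/reducer classes shown earlier, is below; as far as I understand this only addresses the warning and is separate from the failure that follows.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() carries any generic options (-D, -files, ...) parsed by ToolRunner
        Job job = Job.getInstance(getConf(), "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCount.TokenizerMapper.class);
        job.setCombinerClass(WordCount.IntSumReducer.class);
        job.setReducerClass(WordCount.IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner strips the generic Hadoop options before handing args to run()
        System.exit(ToolRunner.run(new Configuration(), new WordCountDriver(), args));
    }
}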
Now, when I check the status of the job in the browser, I find the following error in the logs:
Application application_1458279089418_0001 failed 2 times due to Error
launching appattempt_1458279089418_0001_000002. Got exception:
org.apache.hadoop.net.ConnectTimeoutException: Call From
EN-AbhishekM/192.168.0.102 to 192.168.43.66:61029
failed on socket timeout exception:
org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout
while waiting for channel to be ready for connect. ch :
java.nio.channels.SocketChannel[connection-pending
remote=192.168.43.66/192.168.43.66:61029];....
I am pasting my configuration files here:
core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
yarn-site.xml
<?xml version="1.0"?>
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
I formatted the filesystem with:
bin/hdfs namenode -format
I started the NameNode and DataNode daemons with:
sbin/start-dfs.sh
I started the ResourceManager and NodeManager daemons with:
sbin/start-yarn.sh
Could anyone please suggest what I am doing wrong here?