Hadoop multi-node cluster setup

Time: 2016-01-24 12:36:23

Tags: java apache hadoop

I am trying to set up a multi-node cluster in Hadoop. Why do I get 0 live datanodes, with my HDFS showing 0 bytes allocated?

The NodeManager daemon is, however, running on the datanodes.
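To see how many datanodes the namenode actually considers live (and the capacity it has registered), the HDFS admin report can be queried on the master; this is a standard diagnostic command, shown here only as a pointer:

```
# Run on the namenode host: prints configured/used capacity
# and the list of live and dead datanodes.
hdfs dfsadmin -report
```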

Master: masterhost1 172.31.100.3 # namenode (also acts as secondary namenode)

Slave: datahost1 172.31.100.4 # datanode
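For a two-host setup like this, name resolution is a common stumbling block: each node must resolve both hostnames to the routable IPs, and a leftover `127.0.1.1 masterhost1` entry can make the namenode bind to loopback so datanodes cannot reach it. A minimal `/etc/hosts` sketch for the hosts above (an assumption based on the addresses given in the question):

```
# /etc/hosts on both masterhost1 and datahost1
172.31.100.3   masterhost1   # namenode + secondary namenode
172.31.100.4   datahost1     # datanode
```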

The datanode log is as follows:

```
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r cc865b490b9a6260e9611a5b8633cab885b3d247; compiled by 'jenkins' on 2015-12-18T01:19Z
STARTUP_MSG:   java = 1.8.0_71
************************************************************/
2016-01-24 03:53:28,368 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2016-01-24 03:53:28,862 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop_tmp/hdfs/datanode should be specified as a URI in configuration files. Please update hdfs configuration.
2016-01-24 03:53:36,454 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2016-01-24 03:53:37,127 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-01-24 03:53:37,127 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2016-01-24 03:53:37,132 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is datahost1
2016-01-24 03:53:37,142 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
2016-01-24 03:53:37,195 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2016-01-24 03:53:37,197 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwidth is 1048576 bytes/s
2016-01-24 03:53:37,197 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Number threads for balancing is 5
2016-01-24 03:53:47,331 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-01-24 03:53:47,375 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
2016-01-24 03:53:47,395 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-01-24 03:53:47,400 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2016-01-24 03:53:47,404 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2016-01-24 03:53:47,405 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-01-24 03:53:47,559 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2016-01-24 03:53:47,566 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50075
2016-01-24 03:53:47,566 INFO org.mortbay.log: jetty-6.1.26
2016-01-24 03:53:48,565 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50075
2016-01-24 03:53:49,200 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName = hadoop
2016-01-24 03:53:49,201 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = sudo
2016-01-24 03:53:59,319 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2016-01-24 03:53:59,354 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2016-01-24 03:53:59,401 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2016-01-24 03:53:59,450 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2016-01-24 03:53:59,485 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices:
2016-01-24 03:53:59,491 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop_tmp/hdfs/datanode should be specified as a URI in configuration files. Please update hdfs configuration.
2016-01-24 03:53:59,499 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool (Datanode Uuid unassigned) service to masterhost1/172.31.100.3:9000 starting to offer service
2016-01-24 03:53:59,503 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2016-01-24 03:53:59,504 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2016-01-24 03:54:00,805 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-01-24 03:54:01,808 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-01-24 03:54:02,811 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-01-24 03:54:03,826 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-01-24 03:54:04,831 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
```
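Incidentally, the repeated WARN about `/usr/local/hadoop_tmp/hdfs/datanode` is unrelated to the connection failure, but it is easy to silence: the datanode storage directory can be given as a `file://` URI in `hdfs-site.xml`. A sketch, assuming the same path that appears in the log:

```
<!-- hdfs-site.xml on the datanode: specify the storage directory as a URI -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///usr/local/hadoop_tmp/hdfs/datanode</value>
</property>
```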

1 Answer:

Answer 0 (score: 0)

The problem is with incoming connections: the namenode is not receiving registrations from the datanode because of an IPv6 issue. Simply disable IPv6 on the master node, check the listening ports with netstat, and the problem above should be resolved.
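One way to apply this fix (a sketch, assuming a Linux master and root access; the exact sysctl keys and the `hadoop-env.sh` path depend on the distribution and Hadoop layout):

```
# Disable IPv6 system-wide; add the same keys to /etc/sysctl.conf to persist.
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1

# Alternatively, force the Hadoop JVMs to prefer IPv4 in hadoop-env.sh:
#   export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

# After restarting the namenode, confirm port 9000 is bound on the
# IPv4 address (172.31.100.3 or 0.0.0.0), not only on :::9000.
netstat -tlnp | grep 9000
```

If netstat still shows the namenode listening only on an IPv6 socket or on 127.0.0.1, recheck `/etc/hosts` and the `fs.defaultFS` setting in `core-site.xml` before restarting the datanodes.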