Error: Hadoop multi-node cluster setup with 1 master node and 2 slave nodes

Date: 2014-04-09 13:27:13

Tags: hadoop

I have configured a multi-node Hadoop cluster with 1 master node and 2 slave nodes. Starting Hadoop from the master works fine. However, when I check the running services on the slaves, slave 1 shows the DataNode running, but on slave 2 the DataNode is not running. The log on slave 2 shows the following error:
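For reference, a minimal sketch of the kind of per-slave check described above (the daemon class name is from Hadoop 1.x; normally `jps`, shipped with the JDK, reports the same information more concisely):

```shell
# Sketch: report whether a given Hadoop daemon class is running on this host.
# In Hadoop 1.x a healthy slave should run both a DataNode and a TaskTracker.
daemon_running() {
  if ps aux | grep -v grep | grep -q "$1"; then
    echo "running"
  else
    echo "not running"
  fi
}

daemon_running org.apache.hadoop.hdfs.server.datanode.DataNode
```

Running this on slave 1 vs. slave 2 confirms that only slave 2's DataNode has exited.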

2014-04-09 17:58:20,203 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.10.3.240:50010, storageID=DS-973961736-127.0.1.1-50010-1395131735014, infoPort=50075, ipcPort=50020):DataXceiveServer:java.nio.channels.AsynchronousCloseException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:172)
    at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:132)
    at java.lang.Thread.run(Thread.java:701)

2014-04-09 17:58:20,204 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting DataXceiveServer
2014-04-09 17:58:20,204 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
2014-04-09 17:58:20,206 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Exiting DataBlockScanner thread
2014-04-09 17:58:20,207 INFO org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
2014-04-09 17:58:20,207 INFO org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down
2014-04-09 17:58:20,208 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.10.3.240:50010, storageID=DS-973961736-127.0.1.1-50010-1395131735014, infoPort=50075, ipcPort=50020):Finishing DataNode in: FSDataset{dirpath='/app/hadoop/tmp/dfs/data/current'}
2014-04-09 17:58:20,211 WARN org.apache.hadoop.metrics2.util.MBeans: Hadoop:service=DataNode,name=DataNodeInfo
javax.management.InstanceNotFoundException: Hadoop:service=DataNode,name=DataNodeInfo
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1117)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:433)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:421)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:550)
    at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.unRegisterMXBean(DataNode.java:586)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:855)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1601)
    at java.lang.Thread.run(Thread.java:701)
2014-04-09 17:58:20,212 INFO org.apache.hadoop.ipc.Server: Stopping server on 50020
2014-04-09 17:58:20,212 INFO org.apache.hadoop.ipc.metrics.RpcInstrumentation: shut down
2014-04-09 17:58:20,212 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
2014-04-09 17:58:20,212 WARN org.apache.hadoop.metrics2.util.MBeans: Hadoop:service=DataNode,name=FSDatasetState-DS-973961736-127.0.1.1-50010-1395131735014
javax.management.InstanceNotFoundException: Hadoop:service=DataNode,name=FSDatasetState-DS-973961736-127.0.1.1-50010-1395131735014
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1117)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:433)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:421)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:550)
    at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71)
    at org.apache.hadoop.hdfs.server.datanode.FSDataset.shutdown(FSDataset.java:2093)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:917)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1601)
    at java.lang.Thread.run(Thread.java:701)
2014-04-09 17:58:20,213 WARN org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
2014-04-09 17:58:20,214 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2014-04-09 17:58:20,216 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at sigma-VirtualBox/10.0.2.15
************************************************************/
2014-04-09 18:01:57,962 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = sigma-VirtualBox/10.0.2.15
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.6.0_30
************************************************************/
2014-04-09 18:01:58,223 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2014-04-09 18:01:58,255 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2014-04-09 18:01:58,260 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2014-04-09 18:01:58,260 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2014-04-09 18:01:58,533 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2014-04-09 18:02:04,452 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean
2014-04-09 18:02:04,475 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened data transfer server at 50010
2014-04-09 18:02:04,477 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2014-04-09 18:02:04,481 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
2014-04-09 18:02:09,576 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2014-04-09 18:02:09,666 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2014-04-09 18:02:09,677 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = false
2014-04-09 18:02:09,677 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075
2014-04-09 18:02:09,677 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075
2014-04-09 18:02:09,677 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
2014-04-09 18:02:09,677 INFO org.mortbay.log: jetty-6.1.26
2014-04-09 18:02:10,286 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
2014-04-09 18:02:10,290 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2014-04-09 18:02:10,290 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source DataNode registered.
2014-04-09 18:02:15,329 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2014-04-09 18:02:15,330 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort50020 registered.
2014-04-09 18:02:15,331 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort50020 registered.
2014-04-09 18:02:15,332 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration = DatanodeRegistration(sigma-VirtualBox:50010, storageID=DS-973961736-127.0.1.1-50010-1395131735014, infoPort=50075, ipcPort=50020)
2014-04-09 18:02:15,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Finished generating blocks being written report for 1 volumes in 0 seconds
2014-04-09 18:02:15,367 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.10.3.240:50010, storageID=DS-973961736-127.0.1.1-50010-1395131735014, infoPort=50075, ipcPort=50020)In DataNode.run, data = FSDataset{dirpath='/app/hadoop/tmp/dfs/data/current'}
2014-04-09 18:02:15,385 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2014-04-09 18:02:15,386 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2014-04-09 18:02:15,388 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting
2014-04-09 18:02:15,390 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting
2014-04-09 18:02:15,391 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
2014-04-09 18:02:15,395 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
2014-04-09 18:02:15,399 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block scanner
2014-04-09 18:02:15,416 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Finished asynchronous block report scan in 55ms
2014-04-09 18:02:15,439 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Generated rough (lockless) block report in 25 ms
2014-04-09 18:02:18,407 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is shutting down: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.protocol.UnregisteredDatanodeException: Data node 10.10.3.240:50010 is attempting to report storage ID DS-973961736-127.0.1.1-50010-1395131735014. Node 10.10.3.241:50010 is expected to serve this storage.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDatanode(FSNamesystem.java:5049)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.processReport(FSNamesystem.java:3939)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.blockReport(NameNode.java:1095)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:622)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

    at org.apache.hadoop.ipc.Client.call(Client.java:1113)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at com.sun.proxy.$Proxy5.blockReport(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:1084)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1588)
    at java.lang.Thread.run(Thread.java:701)

2014-04-09 18:02:18,414 INFO org.mortbay.log: Stopped SelectChannelConnector@0.0.0.0:50075
2014-04-09 18:02:18,417 INFO org.apache.hadoop.ipc.Server: Stopping server on 50020
2014-04-09 18:02:18,417 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: exiting
2014-04-09 18:02:18,418 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: exiting
2014-04-09 18:02:18,419 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: exiting
2014-04-09 18:02:18,420 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 50020
2014-04-09 18:02:18,421 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2014-04-09 18:02:18,421 INFO org.apache.hadoop.ipc.metrics.RpcInstrumentation: shut down
2014-04-09 18:02:18,421 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.10.3.240:50010, storageID=DS-973961736-127.0.1.1-50010-1395131735014, infoPort=50075, ipcPort=50020):DataXceiveServer:java.nio.channels.AsynchronousCloseException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:172)
    at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:132)
    at java.lang.Thread.run(Thread.java:701)

2014-04-09 18:02:18,421 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting DataXceiveServer
2014-04-09 18:02:18,422 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
2014-04-09 18:02:18,422 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Exiting DataBlockScanner thread
2014-04-09 18:02:18,423 INFO org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
2014-04-09 18:02:18,423 INFO org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down
2014-04-09 18:02:18,424 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.10.3.240:50010, storageID=DS-973961736-127.0.1.1-50010-1395131735014, infoPort=50075, ipcPort=50020):Finishing DataNode in: FSDataset{dirpath='/app/hadoop/tmp/dfs/data/current'}
2014-04-09 18:02:18,425 WARN org.apache.hadoop.metrics2.util.MBeans: Hadoop:service=DataNode,name=DataNodeInfo
javax.management.InstanceNotFoundException: Hadoop:service=DataNode,name=DataNodeInfo
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1117)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:433)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:421)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:550)
    at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.unRegisterMXBean(DataNode.java:586)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:855)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1601)
    at java.lang.Thread.run(Thread.java:701)
2014-04-09 18:02:18,425 INFO org.apache.hadoop.ipc.Server: Stopping server on 50020
2014-04-09 18:02:18,426 INFO org.apache.hadoop.ipc.metrics.RpcInstrumentation: shut down
2014-04-09 18:02:18,426 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
2014-04-09 18:02:18,426 WARN org.apache.hadoop.metrics2.util.MBeans: Hadoop:service=DataNode,name=FSDatasetState-DS-973961736-127.0.1.1-50010-1395131735014
javax.management.InstanceNotFoundException: Hadoop:service=DataNode,name=FSDatasetState-DS-973961736-127.0.1.1-50010-1395131735014
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1117)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:433)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:421)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:550)
    at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71)
    at org.apache.hadoop.hdfs.server.datanode.FSDataset.shutdown(FSDataset.java:2093)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:917)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1601)
    at java.lang.Thread.run(Thread.java:701)
2014-04-09 18:02:18,426 WARN org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
2014-04-09 18:02:18,427 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2014-04-09 18:02:18,430 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at sigma-VirtualBox/10.0.2.15
************************************************************/
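The decisive line in the log is the `UnregisteredDatanodeException`: the NameNode sees two DataNodes (10.10.3.240 and 10.10.3.241) reporting the same storageID, which typically happens when a slave's data directory was copied or the VM was cloned after HDFS was formatted. As a sketch, the ID each slave reports can be read from its `VERSION` file (the default path below is the FSDataset dirpath from the log; it is an assumption that both slaves use the same `dfs.data.dir`):

```shell
# Hypothetical helper: print the storageID recorded in a DataNode VERSION file.
# The default path is the FSDataset dirpath shown in the log above.
storage_id() {
  grep '^storageID=' "${1:-/app/hadoop/tmp/dfs/data/current/VERSION}" | cut -d= -f2
}
```

Running `storage_id` on both slaves (e.g. `storage_id /app/hadoop/tmp/dfs/data/current/VERSION`) shows whether the two DataNodes share an ID; two live DataNodes must never report the same storageID to the NameNode.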

The Hadoop version is 1.2.1, installed at /usr/local/hadoop.

Below are the Hadoop configuration files:

> core-site.xml
> =============
>
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/app/hadoop/tmp</value>
>     <description>A base for other temporary directories.</description>
>   </property>
>
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://master:54310</value>
>     <description>The name of the default file system. A URI whose
>     scheme and authority determine the FileSystem implementation. The
>     uri's scheme determines the config property (fs.SCHEME.impl) naming
>     the FileSystem implementation class. The uri's authority is used to
>     determine the host, port, etc. for a filesystem.</description>
>   </property>
>
>   <property>
>     <name>hadoop.proxyuser.hduser.hosts</name>
>     <value>*</value>
>   </property>
>
>   <property>
>     <name>hadoop.proxyuser.hduser.groups</name>
>     <value>*</value>
>   </property>
> </configuration>
> 
> 
> mapred-site.xml
> ===============
>
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>   <property>
>     <name>mapred.job.tracker</name>
>     <value>master:54311</value>
>     <description>The host and port that the MapReduce job tracker runs
>     at. If "local", then jobs are run in-process as a single map
>     and reduce task.</description>
>   </property>
>
>   <property>
>     <name>mapred.jobtracker.plugins</name>
>     <value>org.apache.hadoop.thriftfs.ThriftJobTrackerPlugin</value>
>     <description>Comma-separated list of jobtracker plug-ins to be
>     activated.</description>
>   </property>
> </configuration>
> 
> hdfs-site.xml
> =============
>
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>   <property>
>     <name>dfs.replication</name>
>     <value>2</value>
>     <description>Default block replication. The actual number of
>     replications can be specified when the file is created. The
>     default is used if replication is not specified in create
>     time.</description>
>   </property>
>
>   <property>
>     <name>dfs.webhdfs.enable</name>
>     <value>true</value>
>   </property>
> </configuration>

Please help me figure out how to resolve this error.

Thanks in advance.

0 answers:
