I saw a similar question about a year ago; the link is here: see here
I have a similar configuration but am facing the same EOFException error.
Is there something wrong with the Eclipse Hadoop plugin or with my Hadoop configuration? (Note: I have followed the standard configuration, so I don't think there is a misconfiguration; the single-node Hadoop cluster runs fine when I start it with bin/start-all.sh.)
Below is the stack trace when Eclipse connects to HDFS:
java.io.IOException: Call to localhost/127.0.0.1:54311 failed on local exception: java.io.EOFException
at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
at org.apache.hadoop.ipc.Client.call(Client.java:743)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at org.apache.hadoop.mapred.$Proxy1.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
at org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:470)
at org.apache.hadoop.mapred.JobClient.init(JobClient.java:455)
at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:442)
at org.apache.hadoop.eclipse.server.HadoopServer.getJobClient(HadoopServer.java:473)
at org.apache.hadoop.eclipse.server.HadoopServer$LocationStatusUpdater.run(HadoopServer.java:102)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
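The failing call above targets localhost:54311 (the Map/Reduce master port), while the NameNode log further down shows HDFS on 54310, so one thing worth double-checking is that the Eclipse plugin's location settings match the cluster config files. A minimal sketch of that check (the sample XML and the file path are assumptions based on a standard Hadoop 1.x single-node setup; on a real machine, point the extraction at $HADOOP_HOME/conf/mapred-site.xml and core-site.xml):

```shell
# The plugin's "Map/Reduce Master" host/port must equal mapred.job.tracker,
# and "DFS Master" must equal fs.default.name. This demo writes a typical
# single-node mapred-site.xml to /tmp and extracts the configured address.
cat > /tmp/mapred-site.xml <<'EOF'
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
  </property>
</configuration>
EOF

# Pull the <value> out of the property; compare it with the plugin setting.
tracker=$(sed -n 's:.*<value>\(.*\)</value>.*:\1:p' /tmp/mapred-site.xml)
echo "Map/Reduce Master should be: $tracker"
```

If the two disagree, or if the plugin was built against a different Hadoop version than the cluster, the RPC handshake can die with exactly this kind of EOFException.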
The Hadoop NameNode log is as follows:
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = somnath-laptop/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.4
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
************************************************************/
2013-02-11 13:26:56,505 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-02-11 13:26:56,520 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-02-11 13:26:56,521 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-02-11 13:26:56,521 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-02-11 13:26:56,792 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-02-11 13:26:56,797 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-02-11 13:26:56,807 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-02-11 13:26:56,809 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-02-11 13:26:56,882 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
2013-02-11 13:26:56,882 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 17.77875 MB
2013-02-11 13:26:56,883 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^21 = 2097152 entries
2013-02-11 13:26:56,883 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2013-02-11 13:26:56,914 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hduser
2013-02-11 13:26:56,914 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-02-11 13:26:56,914 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-02-11 13:26:56,922 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-02-11 13:26:56,922 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-02-11 13:26:56,957 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-02-11 13:26:57,023 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-02-11 13:26:57,109 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 7
2013-02-11 13:26:57,123 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2013-02-11 13:26:57,124 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 654 loaded in 0 seconds.
2013-02-11 13:26:57,124 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /app/hadoop/tmp/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2013-02-11 13:26:57,126 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 654 saved in 0 seconds.
2013-02-11 13:26:57,548 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 654 saved in 0 seconds.
2013-02-11 13:26:57,825 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2013-02-11 13:26:57,825 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 923 msecs
2013-02-11 13:26:57,830 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode ON. The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
2013-02-11 13:26:57,838 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-02-11 13:26:57,845 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
2013-02-11 13:26:57,870 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort54310 registered.
2013-02-11 13:26:57,871 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort54310 registered.
2013-02-11 13:26:57,873 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: localhost/127.0.0.1:54310
2013-02-11 13:26:57,879 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-02-11 13:27:03,041 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-02-11 13:27:03,142 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-02-11 13:27:03,156 INFO org.apache.hadoop.http.HttpServer: dfs.webhdfs.enabled = false
2013-02-11 13:27:03,164 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2013-02-11 13:27:03,165 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
2013-02-11 13:27:03,166 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2013-02-11 13:27:03,166 INFO org.mortbay.log: jetty-6.1.26
2013-02-11 13:27:03,585 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
2013-02-11 13:27:03,585 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
2013-02-11 13:27:03,586 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2013-02-11 13:27:03,587 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 54310: starting
2013-02-11 13:27:03,587 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 54310: starting
2013-02-11 13:27:03,587 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 54310: starting
2013-02-11 13:27:03,588 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 54310: starting
2013-02-11 13:27:03,588 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 54310: starting
2013-02-11 13:27:03,588 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 54310: starting
2013-02-11 13:27:03,588 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 54310: starting
2013-02-11 13:27:03,588 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 54310: starting
2013-02-11 13:27:03,588 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 54310: starting
2013-02-11 13:27:03,588 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 54310: starting
2013-02-11 13:27:03,589 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 54310: starting
2013-02-11 13:27:07,306 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hduser cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
2013-02-11 13:27:07,308 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 54310, call delete(/app/hadoop/tmp/mapred/system, true) from 127.0.0.1:51327: error:
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
2013-02-11 13:27:14,657 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:50010 storage DS-747527201-127.0.1.1-50010-1360274163059
2013-02-11 13:27:14,661 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:50010
2013-02-11 13:27:14,687 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode extension entered. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 29 seconds.
2013-02-11 13:27:14,687 INFO org.apache.hadoop.hdfs.StateChange: *BLOCK* NameSystem.processReport: from 127.0.0.1:50010, blocks: 1, processing time: 3 msecs
2013-02-11 13:27:15,988 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user webuser
org.apache.hadoop.util.Shell$ExitCodeException: id: webuser: No such user
at org.apache.hadoop.util.Shell.runCommand(Shell.java:255)
at org.apache.hadoop.util.Shell.run(Shell.java:182)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:68)
at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:45)
at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1026)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:50)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5210)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:5193)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:2019)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getFileInfo(NameNode.java:848)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
2013-02-11 13:27:15,988 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user webuser
2013-02-11 13:27:16,007 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user webuser
org.apache.hadoop.util.Shell$ExitCodeException: id: webuser: No such user
at org.apache.hadoop.util.Shell.runCommand(Shell.java:255)
at org.apache.hadoop.util.Shell.run(Shell.java:182)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:68)
at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:45)
at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1026)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:50)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5210)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:5193)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:2019)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getFileInfo(NameNode.java:848)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
2013-02-11 13:27:16,008 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user webuser
2013-02-11 13:27:16,023 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user webuser
org.apache.hadoop.util.Shell$ExitCodeException: id: webuser: No such user
at org.apache.hadoop.util.Shell.runCommand(Shell.java:255)
at org.apache.hadoop.util.Shell.run(Shell.java:182)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:68)
at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:45)
at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1026)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:50)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5210)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:5178)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:2338)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getListing(NameNode.java:831)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
2013-02-11 13:27:16,024 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user webuser
2013-02-11 13:27:17,327 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hduser cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 27 seconds.
2013-02-11 13:27:17,327 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 54310, call delete(/app/hadoop/tmp/mapred/system, true) from 127.0.0.1:51335: error:
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 27 seconds.
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 27 seconds.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
2013-02-11 13:27:27,339 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hduser cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 17 seconds.
2013-02-11 13:27:27,339 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 54310, call delete(/app/hadoop/tmp/mapred/system, true) from 127.0.0.1:51336: error:
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 17 seconds.
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 17 seconds.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
Any quick help would be greatly appreciated.
Answer 0 (score: 0)
I had the same problem, and I solved it by running the .jar with:
hadoop jar <jar_name> main_class_name
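The idea in the answer above is to submit the job through the hadoop launcher rather than plain java, so that the cluster's own client libraries end up on the classpath; a mismatched client jar is one known way to hit this EOFException. A minimal sketch (the jar name, main class, and HDFS paths below are hypothetical placeholders, not from the original post):

```shell
# Hypothetical jar and main class; substitute your own.
JAR=wordcount.jar
MAIN=org.example.WordCount

# `hadoop jar` wraps java with the cluster's conf/ and lib/ on the
# classpath, so the RPC client version matches the running daemons.
# Shown with echo here; uncomment the real call on a live cluster.
echo "hadoop jar $JAR $MAIN /input /output"
# hadoop jar "$JAR" "$MAIN" /input /output
```

Note also that several of the delete() failures in the NameNode log are only safe-mode rejections during startup; the log itself says safe mode turns off automatically shortly afterwards, so those are not the root cause.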