Running a Hadoop application - ERROR security.UserGroupInformation: PriviledgedActionException

Date: 2013-10-02 20:12:41

Tags: java eclipse hadoop

I have written the Hadoop WordCount code in Eclipse as a Java application to test my Hadoop installation by running an application, but when I try to run it as the hdfs user I get this error:

./hadoop jar /home/masi/eclipse_workspace/WordCount_apacheSample/bin/test2.jar WordCountApacheSample /user/hdfs/wordCountInput /user/hdfs/wordCountOutput
13/10/02 17:14:50 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is inited.
13/10/02 17:14:50 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is started.
13/10/02 17:14:50 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.net.ConnectException: Call From virtual-machine/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
Exception in thread "main" java.net.ConnectException: Call From virtual-machine/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:532)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:780)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:727)
    at org.apache.hadoop.ipc.Client.call(Client.java:1239)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
    at sun.proxy.$Proxy9.getFileInfo(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
    at sun.proxy.$Proxy9.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:630)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1559)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:811)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1345)
    at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:140)
    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:418)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:333)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1218)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1215)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1215)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1236)
    at WordCountApacheSample.main(WordCountApacheSample.java:71)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:597)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:508)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:603)
    at org.apache.hadoop.ipc.Client$Connection.access$2100(Client.java:253)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1288)
    at org.apache.hadoop.ipc.Client.call(Client.java:1206)
    ... 29 more

I have also tried prefixing the input and output paths with hdfs://localhost:9000/, but it makes no difference! By the way, I have read many posts related to my problem, but none of them helped.
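For reference, here is a quick way to confirm whether anything is actually listening on the NameNode port named in the error (a diagnostic sketch, assuming a Linux host; the port 9000 comes from the error message above):

jps                          # lists running Hadoop JVMs; NameNode and DataNode should appear if the daemons are up
netstat -tlnp | grep 9000    # checks for a listener on the NameNode RPC port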

Any help is appreciated. Thanks.

2 Answers:

Answer 0 (score: 1)

I finally solved the problem myself and decided to post the cause here to help others :) The reason sounds a bit silly, but the problem was this: the Hadoop daemons had stopped! My VM shut down unexpectedly, and after restarting the VM I forgot to start the daemons (datanode, namenode, ...) again. So the cause of the problem was simply that the datanode, the namenode, and the other daemons were not running. A sketch of how to check for and restart them is shown below.
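A minimal sketch of verifying and restarting the daemons, assuming a Hadoop 2.x single-node installation with the standard control scripts under $HADOOP_HOME/sbin (exact paths vary by install):

jps                              # shows running Hadoop JVMs; NameNode and DataNode should be listed
$HADOOP_HOME/sbin/start-dfs.sh   # starts NameNode, DataNode, SecondaryNameNode
$HADOOP_HOME/sbin/start-yarn.sh  # starts ResourceManager and NodeManager

Once jps shows the NameNode running, the ConnectException on localhost:9000 should go away.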

Answer 1 (score: 0)

If you find that your HDFS is corrupted, you can do the following:

sudo -su hdfs                    # switch to the hdfs superuser
hadoop fsck /                    # report the health of the filesystem
hadoop dfsadmin -safemode leave  # take the NameNode out of safe mode so files can be modified

... then delete the corrupted files, if any, using the following:

hadoop fs -rmr -skipTrash <folder with your files>   # remove the affected folder, bypassing the trash
hadoop fsck -files delete /                          # delete any remaining corrupt files reported by fsck

Check the status:

hadoop fsck /

After this, the status should be HEALTHY. Then manually restart everything in Ambari.

I tried this on a small cluster and, after running into a similar error, managed to get it back up and running.