Hadoop: ERROR security.UserGroupInformation: PriviledgedActionException in a MapReduce program

Date: 2015-03-20 16:46:28

Tags: java hadoop mapreduce

I am trying to run a MapReduce job. When I execute the following command:

hduser@ubuntu:/usr/local/hadoop$ bin/hadoop jar hadoop*examples*.jar wordcount /user/hduser/gutenberg /user/hduser/gutenberg-output

it gives me the following output:

/usr/local/hadoop$ bin/hadoop jar hadoop*examples*.jar wordcount /user/hduser/gutenberg /user/hduser/gutenberg-output
Warning: $HADOOP_HOME is deprecated.
15/03/20 22:03:42 ERROR security.UserGroupInformation: PriviledgedActionException as:suzon cause:org.apache.hadoop.ipc.RemoteException: java.io.IOException: Unknown protocol to name node: org.apache.hadoop.mapred.JobSubmissionProtocol
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getProtocolVersion(NameNode.java:152)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
org.apache.hadoop.ipc.RemoteException: java.io.IOException: Unknown protocol to name node: org.apache.hadoop.mapred.JobSubmissionProtocol
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getProtocolVersion(NameNode.java:152)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at org.apache.hadoop.mapred.$Proxy2.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:411)
    at org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:499)
    at org.apache.hadoop.mapred.JobClient.init(JobClient.java:490)
    at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:473)
    at org.apache.hadoop.mapreduce.Job$1.run(Job.java:513)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
    at org.apache.hadoop.mapreduce.Job.connect(Job.java:511)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:499)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
    at org.apache.hadoop.examples.WordCount.main(WordCount.java:67)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
    at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

My jps output:

suzon@Suzon:/usr/local/hadoop$ jps
14944 Jps
14413 SecondaryNameNode
14233 DataNode
14076 NameNode

Here is my core-site.xml configuration:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. --> 
<configuration> 
    <property> 
        <name>hadoop.tmp.dir</name> 
        <value>/app/hadoop/tmp</value> 
        <description>A base for other temporary directories.</description> 
    </property> 
    <property> 
        <name>fs.default.name</name> 
        <value>hdfs://localhost:54311</value> 
        <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to determine the host, port, etc. for a filesystem.</description> 
    </property> 
    <property> 
        <name>mapred.job.tracker</name> 
        <value>localhost:54311</value> 
        <description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task. </description> 
    </property> 
</configuration>
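One detail worth noting in the file above: fs.default.name and mapred.job.tracker both point at localhost:54311, so anything that dials the JobTracker address actually reaches the NameNode's RPC port, which would line up with the "Unknown protocol to name node: ... JobSubmissionProtocol" error. The sketch below (illustrative only, using the values pasted above) checks for that endpoint collision:

```python
# Illustrative sketch: detect when fs.default.name and mapred.job.tracker
# resolve to the same host:port, which makes the job client offer
# JobSubmissionProtocol to the NameNode.
from urllib.parse import urlparse
import xml.etree.ElementTree as ET

# Minimal copy of the core-site.xml shown in the question.
CORE_SITE = """<configuration>
  <property><name>fs.default.name</name><value>hdfs://localhost:54311</value></property>
  <property><name>mapred.job.tracker</name><value>localhost:54311</value></property>
</configuration>"""

def properties(xml_text):
    # Flatten <property><name>/<value> pairs into a dict.
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value") for p in root.iter("property")}

props = properties(CORE_SITE)
fs_uri = urlparse(props["fs.default.name"])            # hdfs://localhost:54311
jt_host, jt_port = props["mapred.job.tracker"].split(":")

collision = (fs_uri.hostname, fs_uri.port) == (jt_host, int(jt_port))
print("NameNode and JobTracker share an endpoint:", collision)  # True here
```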

My mapred-site.xml configuration:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. --> 
<configuration> 
    <property> 
        <name>mapred.job.tracker</name> 
        <value>localhost:54311</value> 
        <description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task. </description> 
    </property> 
</configuration>

My hdfs-site.xml configuration:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. --> 
<configuration> 
    <property> 
        <name>dfs.replication</name> 
        <value>1</value> 
        <description>Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified in create time. </description> 
    </property> 
</configuration>
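For comparison, common single-node Hadoop 1.x setup guides keep the HDFS and JobTracker endpoints on different ports and put each property only in its own file. A typical split looks like the fragment below (illustrative values; port 54310 for HDFS is an assumption taken from popular tutorials, not from this question):

```xml
<!-- core-site.xml: HDFS endpoint only (port 54310 is an illustrative choice) -->
<property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
</property>

<!-- mapred-site.xml: JobTracker endpoint, kept out of core-site.xml -->
<property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
</property>
```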

0 Answers:

No answers yet.