Hadoop: could not find or load main class

Time: 2016-12-26 20:53:28

Tags: hadoop, linuxmint

I tried to install Hadoop following this video:
https://www.youtube.com/watch?v=CtOhsZ0Sb1E&t=126s
When I ran the last command,

start-all.sh  

I got this message:

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh  
Starting namenodes on [localhost]  
localhost: namenode running as process 6283. Stop it first.  
localhost: starting datanode, logging to /home/myname/hadoop-2.7.3/logs/hadoop-myname-datanode-MYNAME.out
Starting secondary namenodes [0.0.0.0]  
0.0.0.0: secondarynamenode running as process 6379. Stop it first.  
starting yarn daemons  
starting resourcemanager, logging to /home/myname/hadoop-2.7.3/logs/yarn-myname-resourcemanager-MYNAME.out
Error: Could not find or load main class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager
localhost: starting nodemanager, logging to /home/myname/hadoop-2.7.3/logs/yarn-myname-nodemanager-MYNAME.out
localhost: Error: Could not find or load main class org.apache.hadoop.yarn.server.nodemanager.NodeManager
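
These classes normally ship in the YARN jars under share/hadoop/yarn, so one sanity check (a sketch, assuming the install path shown in the log) is to confirm the ResourceManager jar exists and that the YARN directories appear on the Hadoop classpath:

ls /home/myname/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar
hadoop classpath | tr ':' '\n' | grep yarn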

My .bashrc file:

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_INSTALL=/home/myname/hadoop-2.7.3
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL 
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib/native"

My hdfs-site.xml:

<configuration>
 <property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
 </property>
 <property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/myname/hadoop-2.7.3/etc/hadoop/hadoop_store/hdfs/namenode</value>
 </property>
 <property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/myname/hadoop-2.7.3/etc/hadoop/hadoop_store/hdfs/datanode</value>
 </property>
</configuration>
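
For these paths to work the directories have to exist, and the NameNode has to be formatted once before the first start; a common setup step, assuming the paths above, is:

mkdir -p /home/myname/hadoop-2.7.3/etc/hadoop/hadoop_store/hdfs/namenode
mkdir -p /home/myname/hadoop-2.7.3/etc/hadoop/hadoop_store/hdfs/datanode
hdfs namenode -format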

My core-site.xml:

<configuration>  
 <property>  
  <name>hadoop.tmp.dir</name>  
  <value>/home/myname/hadoop-2.7.3/tmp</value>  
  <description>A base for other temporary directories.</description>  
 </property>  

 <property>  
  <name>fs.default.name</name>  
  <value>hdfs://localhost:54310</value>  
  <description>The name of the default file system.  A URI whose  
  scheme and authority determine the FileSystem implementation.  The  
  uri's scheme determines the config property (fs.SCHEME.impl) naming  
  the FileSystem implementation class.  The uri's authority is used to  
  determine the host, port, etc. for a filesystem.</description>    
 </property>  
</configuration>  
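
As a side note, fs.default.name is deprecated in Hadoop 2.x; the current key (a sketch, reusing the same value as above) is fs.defaultFS:

 <property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:54310</value>
 </property>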

My mapred-site.xml:

<configuration>  
 <property>  
  <name>mapred.job.tracker</name>  
  <value>localhost:54311</value>  
  <description>The host and port that the MapReduce job tracker runs  
  at.  If "local", then jobs are run in-process as a single map  
  and reduce task.  
  </description>  
 </property>  
</configuration>  
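
As a side note, mapred.job.tracker is the old MRv1 JobTracker setting; on a YARN release such as 2.7.3 the MapReduce runtime is normally selected with mapreduce.framework.name instead, e.g.:

 <property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
 </property>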

I have tried many things, but the error still persists.
Any ideas?

2 Answers:

Answer 0 (score: 0)

Add the following line to your .bashrc file:

export HADOOP_PREFIX=/path_to_hadoop_location
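
Then reload the file and restart the daemons so the change takes effect, for example:

source ~/.bashrc
stop-dfs.sh && stop-yarn.sh
start-dfs.sh && start-yarn.sh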

Answer 1 (score: 0)

You must also include a yarn-site.xml file when configuring Hadoop; the yarn.nodemanager.aux-services property shown below belongs in yarn-site.xml (not in mapred-site.xml):

<configuration>
 <property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
 </property>
</configuration>

I think adding these properties should resolve the issue.
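
After saving the file, restart YARN and check with jps that the ResourceManager and NodeManager daemons now stay up:

stop-yarn.sh
start-yarn.sh
jps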