Hadoop - Could not find or load main class org.apache.hadoop.hdfs.qjournal.server.JournalNode

Date: 2017-04-23 20:43:52

Tags: hadoop hdfs hadoop2

Starting the JournalNode fails with the following error:

./hadoop-daemon.sh start journalnode

Error: Could not find or load main class org.apache.hadoop.hdfs.qjournal.server.JournalNode

What could be causing this?
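For context: the JournalNode main class is loaded from the hadoop-hdfs jar, so this error normally means that jar is not on the classpath that hadoop-daemon.sh builds. One quick way to see what ends up on the classpath (a sketch, assuming the hadoop command is on the PATH):

# Print the classpath the Hadoop scripts resolve, one entry per line,
# and check whether a hadoop-hdfs jar (which contains the JournalNode class) is on it
hadoop classpath | tr ':' '\n' | grep hdfs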

This is my core-site.xml:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hdfscluster</value>
    </property>
    <property>
        <name>io.native.lib.available</name>
        <value>True</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>65536</value>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <value>60</value>
    </property>
</configuration>

And here is my hdfs-site.xml:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///srv/node/d1/hdfs/nn,file:///srv/node/d2/hdfs/nn,file:///srv/node/d3/hdfs/nn</value>
        <final>true</final>
    </property>

    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///srv/node/d1/hdfs/dn,file:///srv/node/d2/hdfs/dn,file:///srv/node/d3/hdfs/dn</value>
        <final>true</final>
    </property>

    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file:///srv/node/d1/hdfs/snn,file:///srv/node/d2/hdfs/snn,file:///srv/node/d3/hdfs/snn</value>
        <final>true</final>
    </property>

    <property>
        <name>dfs.nameservices</name>
        <value>hdfscluster</value>
    </property>

    <property>
        <name>dfs.ha.namenodes.hdfscluster</name>
        <value>nn1,nn2</value>
    </property>

    <property>
        <name>dfs.namenode.rpc-address.hdfscluster.nn1</name>
        <value>192.168.57.101:8020</value>
    </property>

    <property>
        <name>dfs.namenode.http-address.hdfscluster.nn1</name>
        <value>192.168.57.101:50070</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.hdfscluster.nn2</name>
        <value>192.168.57.102:8020</value>
    </property>

    <property>
        <name>dfs.namenode.http-address.hdfscluster.nn2</name>
        <value>192.168.57.102:50070</value>
    </property>

    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/srv/node/d1/hdfs/journal</value>
        <final>true</final>
    </property>

    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://192.168.57.101:8485;192.168.57.102:8485;192.168.57.103:8485/hdfscluster</value>
    </property>

    <property>
        <name>dfs.client.failover.proxy.provider.hdfscluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>

    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>

    <property>
        <name>ha.zookeeper.quorum</name>
        <value>192.168.57.101:2181,192.168.57.102:2181,192.168.57.103:2181</value>
    </property>

    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>

    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hdfs/.ssh/id_dsa</value>
    </property>

    <property>
        <name>dfs.hosts</name>
        <value>/etc/hadoop/conf/dfs.hosts</value>
    </property>

    <property>
        <name>dfs.hosts.exclude</name>
        <value>/etc/hadoop/conf/dfs.hosts.exclude</value>
    </property>

    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.permission</name>
        <value>False</value>
    </property>
    <property>
        <name>dfs.durable.sync</name>
        <value>True</value>
    </property>
    <property>
        <name>dfs.datanode.synconclose</name>
        <value>True</value>
    </property>
</configuration>

The node's IP is 192.168.57.103; it is supposed to run a JournalNode and a DataNode.

I am using Hadoop 2.8.0. Is this a configuration problem, or am I missing something?

1 answer:

Answer 0 (score: 0):

I don't know why, but the /usr/lib/hadoop/share/hadoop/ directory was missing. I reinstalled Hadoop from scratch and now it works.
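For anyone hitting the same error later: before reinstalling, it may be worth checking whether the HDFS jars are simply missing from the install tree. A minimal sketch, assuming the install lives under /usr/lib/hadoop as above and the version is 2.8.0 (adjust paths to your layout):

# The JournalNode class ships in the hadoop-hdfs jar under share/hadoop/hdfs
ls /usr/lib/hadoop/share/hadoop/hdfs/hadoop-hdfs-*.jar

# Confirm the class itself is packaged in that jar
unzip -l /usr/lib/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.0.jar | grep qjournal/server/JournalNode.class

If the jar is missing or empty, the installation is broken and reinstalling (or re-extracting the tarball) is the straightforward fix, as in the answer above.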