Cannot put files into HDFS

Time: 2018-04-24 10:22:42

Tags: hadoop hdfs

I have set up a 2-node Hadoop cluster and configured the Hadoop configuration files. When I start dfs and yarn on my master machine, they start the namenode (on the master) and the datanode (on the slave). But when I run 'jps' on the master, it does not show the datanode (on the slave) as running. The datanode actually is running, though, because I can see it when I call 'jps' on the slave machine. I have set up SSH communication correctly between the master and the slave (the slave is reachable from the master). When I run 'jps' on the master, it shows

13218 SecondaryNameNode
14327 Jps
12984 NameNode
13388 ResourceManager

When I run 'jps' on the slave, it shows

10313 DataNode
10828 Jps
10479 NodeManager
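
(Note: 'jps' only lists Java processes on the machine it runs on, so the slave's datanode will never appear in the master's 'jps' output. What matters is whether the datanode has registered with the namenode, which can be checked from the master; a minimal sketch, assuming the same ./hadoop install prefix used in the commands below:)

# Ask the namenode which datanodes have actually registered with it;
# jps only shows JVMs on the local machine.
./hadoop/bin/hdfs dfsadmin -report
# "Live datanodes (0)" here would match the put error shown further down.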

When I run ./hadoop/bin/hadoop fs -ls /, it shows

Found 1 items
drwxr-xr-x   - user supergroup          0 2018-04-24 16:15 /new_directory

But when I run something to put a file onto HDFS, such as ./hadoop/bin/hadoop fs -put '/home/user/Desktop/words.txt' /new_directory, it shows

18/04/24 16:17:39 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /new_directory/words.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1733)
	at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2496)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:828)
  
  ...
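
(The decisive line in this trace is "There are 0 datanode(s) running": the namenode has no registered datanodes, even though a DataNode JVM is alive on the slave. One way to confirm this, sketched under the assumption that the namenode web UI is on the Hadoop 2.x default port 50070, is the namenode's JMX servlet:)

# Query the namenode's built-in JMX servlet (default HTTP port 50070 in
# Hadoop 2.x; adjust if dfs.namenode.http-address was overridden).
curl 'http://master:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState'
# "NumLiveDataNodes" : 0 in the JSON reply confirms that no datanode
# has registered with this namenode.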

Here are the configuration files.

core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://master:9000</value>
    </property>
</configuration>
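
(fs.default.name is the deprecated Hadoop 1.x key; it still works in 2.8, but the current spelling is fs.defaultFS. A sketch of the equivalent property:)

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
</property>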

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
    <property>
            <name>dfs.namenode.name.dir</name>
            <value>/home/user/hadoop66_data/data/nameNode</value>
    </property>

    <property>
            <name>dfs.datanode.data.dir</name>
            <value>/home/user/hadoop66_data/data/dataNode</value>
    </property>

    <property>
            <name>dfs.replication</name>
            <value>1</value>
    </property>
</configuration>
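
(The name and data directories above each hold a current/VERSION file whose clusterID must match, otherwise the datanode's storage is rejected at registration time; that is a common cause of "There are 0 datanode(s) running" after a namenode re-format. A quick consistency check, assuming the default current/VERSION layout under each directory:)

# clusterID in the datanode's VERSION (on node1) must equal the one in
# the namenode's VERSION (on master); they diverge if the namenode is
# re-formatted while the datanode keeps its old storage directory.
cat /home/user/hadoop66_data/data/nameNode/current/VERSION   # on master
cat /home/user/hadoop66_data/data/dataNode/current/VERSION   # on node1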

mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
    <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
    </property>

    <property>
            <name>yarn.app.mapreduce.am.resource.mb</name>
            <value>512</value>
    </property>

    <property>
            <name>mapreduce.map.memory.mb</name>
            <value>256</value>
    </property>

    <property>
            <name>mapreduce.reduce.memory.mb</name>
            <value>256</value>
    </property>
</configuration>

yarn-site.xml

<?xml version="1.0"?>

<configuration>
    <property>
            <name>yarn.acl.enable</name>
            <value>0</value>
    </property>

    <property>
            <name>yarn.resourcemanager.hostname</name>
            <value>master</value>
    </property>

    <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
    </property>
    
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>1536</value>
    </property>

    <property>
            <name>yarn.scheduler.maximum-allocation-mb</name>
            <value>1536</value>
    </property>

    <property>
            <name>yarn.scheduler.minimum-allocation-mb</name>
            <value>128</value>
    </property>

    <property>
            <name>yarn.nodemanager.vmem-check-enabled</name>
            <value>false</value>
    </property>
</configuration>
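
(Since the NodeManager on the slave does show up, the YARN side can be verified the same way; 'yarn node -list' asks the resourcemanager which nodemanagers are registered. A sketch, again assuming the ./hadoop prefix:)

# List the nodemanagers registered with the resourcemanager; node1
# appearing here shows that master-to-node1 networking works for YARN.
./hadoop/bin/yarn node -list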

My /etc/hosts file looks like this:

127.0.0.1   localhost
127.0.1.1   BL
# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.34.3 master
192.168.32.4 node1
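
(For the datanode on node1 to register, node1 must be able to resolve 'master' and reach the namenode RPC port 9000 configured in core-site.xml. A quick check from node1, sketched with netcat, which may need to be installed:)

# Run on node1: check name resolution and reachability of the
# namenode RPC port (9000, per core-site.xml).
getent hosts master
nc -zv master 9000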

hadoop-2.8.1/etc/hadoop/masters (on the master)

master

hadoop-2.8.1/etc/hadoop/slaves (on the master)

node1
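
(In Hadoop 2.x the slaves file is read only by the sbin/start-dfs.sh and sbin/start-yarn.sh helper scripts to decide which hosts to SSH into; it does not control which datanodes the namenode accepts. A sketch of the start sequence this layout implies:)

# On the master: each script SSHes into every host listed in
# etc/hadoop/slaves and launches the worker daemon there.
./hadoop-2.8.1/sbin/start-dfs.sh    # namenode, secondarynamenode, datanodes
./hadoop-2.8.1/sbin/start-yarn.sh   # resourcemanager, nodemanagers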

0 Answers:

No answers yet.