org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to choose remote rack (location = ~/default/rack)

Time: 2018-12-08 22:42:42

Tags: hadoop

I am running into the following error when I try to write a file to HDFS.

It is a 4-node cluster. The network topology says there is no node to choose, yet all the datanodes are up and their block reports have come in. Am I missing some configuration?

2018-12-09 10:32:00,048 DEBUG org.apache.hadoop.net.NetworkTopology: No node to choose.
2018-12-09 10:32:00,049 DEBUG org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to choose from local rack (location = /home/hdfs/rack-138); the second replica is not found, retry choosing ramdomly

Below is my Scala code.

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.fs.Path
import java.io.PrintWriter

object Hdfs {
  def write(uri: String, filePath: String, data: Array[Byte]): Unit = {
    System.setProperty("HADOOP_USER_NAME", "hdfs")
    val path = new Path(filePath)
    val conf = new Configuration()
    conf.set("fs.defaultFS", uri)
    val fs = FileSystem.get(conf)
    val os = fs.create(path)
    os.write(data)

    // Close the stream before the FileSystem so the write is flushed.
    os.close()
    fs.close()
  }
  def main(args: Array[String]): Unit = {
    //val conf = ConfigFactory.load()
    // write(conf.getString("hdfs.uri"), conf.getString("hdfs.result_path"), "Hello World".getBytes)
    write("hdfs://hadoop-master:8020","test.txt","Hello World".getBytes)
  }
}
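
As an aside, a variant I could try to narrow this down (just a sketch; dfs.replication only changes the default replication the client asks for, and the object/method names are made up for illustration) is to request a single replica, since the debug log complains only about the second replica:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object HdfsSingleReplica {
  def writeWithOneReplica(uri: String, filePath: String, data: Array[Byte]): Unit = {
    System.setProperty("HADOOP_USER_NAME", "hdfs")
    val conf = new Configuration()
    conf.set("fs.defaultFS", uri)
    // Ask the client for a single replica so no remote-rack placement is needed.
    conf.setInt("dfs.replication", 1)

    val fs = FileSystem.get(conf)
    val os = fs.create(new Path(filePath))
    try {
      os.write(data)
    } finally {
      // Close the stream before the FileSystem.
      os.close()
      fs.close()
    }
  }
}

If this version succeeds, that would suggest the failure is limited to placing the second (remote-rack) replica rather than the write path itself.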

================ core-site.xml ===========

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>         
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-master:8020</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop-master:8020</value>
  </property>
  <property>
    <name>topology.script.file.name</name>
    <value>/opt/hadoop/etc/hadoop/rack-topology.sh</value>
  </property>
</configuration>

=======================================
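
Since topology.script.file.name points at /opt/hadoop/etc/hadoop/rack-topology.sh, I also want to check which rack that script returns for each host. A sketch of doing that with Hadoop's own resolver (ScriptBasedMapping is the standard script-based mapper; the host list is just the addresses from the report below, and the object name is made up):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.net.ScriptBasedMapping

import scala.collection.JavaConverters._

object PrintScriptRacks {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    // net.topology.script.file.name is the current key; the deprecated
    // topology.script.file.name from core-site.xml maps to it in Hadoop 2.x.
    conf.set("net.topology.script.file.name", "/opt/hadoop/etc/hadoop/rack-topology.sh")

    val mapping = new ScriptBasedMapping()
    mapping.setConf(conf)

    // Must run on a host that actually has the topology script.
    val hosts = List("hadoop-master", "172.19.0.2", "172.19.0.3", "172.19.0.4")
    val racks = mapping.resolve(hosts.asJava).asScala
    hosts.zip(racks).foreach { case (h, r) => println(s"$h -> $r") }
  }
}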

[hdfs@hadoop-master hadoop]$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Configured Capacity: 3107945754624 (2.83 TB)
Present Capacity: 2892706406400 (2.63 TB)
DFS Remaining: 2892705902592 (2.63 TB)
DFS Used: 503808 (492 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (3):

Name: 172.19.0.4:50010 (datanode3.hadoopspark_hadoop)
Hostname: hadoop-slave3
Rack: /default/rack
Decommission Status : Normal
Configured Capacity: 1035981918208 (964.83 GB)
DFS Used: 167936 (164 KB)
Non DFS Used: 71746449408 (66.82 GB)
DFS Remaining: 964235300864 (898.01 GB)
DFS Used%: 0.00%
DFS Remaining%: 93.07%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Dec 08 22:40:57 UTC 2018


Name: 172.19.0.2:50010 (datanode1.hadoopspark_hadoop)
Hostname: hadoop-slave1
Rack: /default/rack
Decommission Status : Normal
Configured Capacity: 1035981918208 (964.83 GB)
DFS Used: 167936 (164 KB)
Non DFS Used: 71746449408 (66.82 GB)
DFS Remaining: 964235300864 (898.01 GB)
DFS Used%: 0.00%
DFS Remaining%: 93.07%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Dec 08 22:40:57 UTC 2018


Name: 172.19.0.3:50010 (datanode2.hadoopspark_hadoop)
Hostname: hadoop-slave2
Rack: /default/rack
Decommission Status : Normal
Configured Capacity: 1035981918208 (964.83 GB)
DFS Used: 167936 (164 KB)
Non DFS Used: 71746449408 (66.82 GB)
DFS Remaining: 964235300864 (898.01 GB)
DFS Used%: 0.00%
DFS Remaining%: 93.07%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Dec 08 22:40:57 UTC 2018


[hdfs@hadoop-master hadoop]$ 
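
For comparison, the same datanode/rack view can also be pulled from the client side (a sketch using DistributedFileSystem.getDataNodeStats, which needs superuser rights just like dfsadmin -report; the object name is made up):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.hdfs.DistributedFileSystem

object PrintDatanodeRacks {
  def main(args: Array[String]): Unit = {
    System.setProperty("HADOOP_USER_NAME", "hdfs")
    val conf = new Configuration()
    conf.set("fs.defaultFS", "hdfs://hadoop-master:8020")

    val fs = FileSystem.get(conf).asInstanceOf[DistributedFileSystem]
    // Same live-datanode view that `hdfs dfsadmin -report` prints.
    fs.getDataNodeStats().foreach { dn =>
      println(s"${dn.getHostName} rack=${dn.getNetworkLocation}")
    }
    fs.close()
  }
}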

0 Answers:

There are no answers.