I am following Tom White's 'Hadoop: The Definitive Guide'. When I try to read data from a Hadoop URL using the Java interface, I get the following error messages:
hadoop@ubuntu:/usr/local/hadoop$ hadoop URLCat hdfs://master/hdfs/data/SampleText.txt
12/11/21 13:46:32 INFO ipc.Client: Retrying connect to server: master/192.168.9.55:8020. Already tried 0 time(s).
12/11/21 13:46:33 INFO ipc.Client: Retrying connect to server: master/192.168.9.55:8020. Already tried 1 time(s).
12/11/21 13:46:34 INFO ipc.Client: Retrying connect to server: master/192.168.9.55:8020. Already tried 2 time(s).
12/11/21 13:46:35 INFO ipc.Client: Retrying connect to server: master/192.168.9.55:8020. Already tried 3 time(s).
12/11/21 13:46:36 INFO ipc.Client: Retrying connect to server: master/192.168.9.55:8020. Already tried 4 time(s).
12/11/21 13:46:37 INFO ipc.Client: Retrying connect to server: master/192.168.9.55:8020. Already tried 5 time(s).
The contents of the URLCat file are as follows:
import java.io.InputStream;
import java.net.URL;

import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;
import org.apache.hadoop.io.IOUtils;

public class URLCat {

    // Register Hadoop's stream handler so java.net.URL understands hdfs:// URLs.
    // Note: setURLStreamHandlerFactory may only be called once per JVM.
    static {
        URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
    }

    public static void main(String[] args) throws Exception {
        InputStream in = null;
        try {
            // Open the HDFS file named on the command line and copy it to stdout.
            in = new URL(args[0]).openStream();
            IOUtils.copyBytes(in, System.out, 4096, false);
        } finally {
            IOUtils.closeStream(in);
        }
    }
}
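For reference, a typical way to compile this class and put it on the launcher's classpath looks like the following (the use of the hadoop classpath subcommand is an assumption about the distribution; adjust paths to your installation):

hadoop@ubuntu:/usr/local/hadoop$ javac -classpath $(hadoop classpath) URLCat.java
hadoop@ubuntu:/usr/local/hadoop$ export HADOOP_CLASSPATH=.
hadoop@ubuntu:/usr/local/hadoop$ hadoop URLCat hdfs://master/hdfs/data/SampleText.txt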
The contents of /etc/hosts are:
127.0.0.1 localhost
127.0.1.1 ubuntu.ubuntu-domain ubuntu
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
# /etc/hosts Master and slaves
192.168.9.55 master
192.168.9.56 slave1
192.168.9.57 slave2
192.168.9.58 slave3
Answer 0 (score: 1)
First, I would check whether the Hadoop daemons are running. A handy tool for this is jps. Make sure that (at least) the namenode and the datanodes are running.
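On a healthy node you would expect the HDFS processes to show up in the jps listing, something along these lines (the process IDs below are illustrative):

hadoop@ubuntu:/usr/local/hadoop$ jps
4850 NameNode
5221 DataNode
5431 SecondaryNameNode
6090 Jps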
If you still cannot connect, check that the URL is correct. Since you provided hdfs://master/ (without any port number), Hadoop assumes your namenode listens on port 8020 (the default). That is what you see in the log.
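If your namenode actually listens on a non-default port, you can name it explicitly in the URL. For example, assuming a namenode on port 54310 (see below):

hadoop@ubuntu:/usr/local/hadoop$ hadoop URLCat hdfs://master:54310/hdfs/data/SampleText.txt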
Then take a quick look at core-site.xml (the fs.default.name property) to check whether a custom port is defined for the filesystem URI (in this case 54310).
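For example, a core-site.xml that moves the namenode off the default port would contain an entry like this (the 54310 value is illustrative; use whatever your cluster actually defines):

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
  </property>
</configuration>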