HttpFS HDFS read timed out

Date: 2019-04-20 11:16:35

Tags: scala apache-spark kubernetes hdfs

I have set up access to HDFS in Kubernetes via an HttpFS gateway, because I need to reach the HDFS datanodes and not just the metadata on the namenode. I can connect to HDFS through the NodePort service with telnet, but when I try to get anything from HDFS (read a file, check that a file exists), I get this error:

[info]   java.net.SocketTimeoutException: Read timed out
[info]   at java.net.SocketInputStream.socketRead0(Native Method)
[info]   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
[info]   at java.net.SocketInputStream.read(SocketInputStream.java:171)
[info]   at java.net.SocketInputStream.read(SocketInputStream.java:141)
[info]   at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
[info]   at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
[info]   at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
[info]   at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:735)
[info]   at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
[info]   at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1587)

What could be causing this error? Here is the source code that opens the connection to the HDFS file system and checks whether the file exists:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val url = "webhdfs://192.168.99.100:31400"
val fs = FileSystem.get(new java.net.URI(url), new Configuration())
val check = fs.exists(new Path(dirPath))

The directory given by the dirPath parameter exists on HDFS.
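As a side check, the same existence test can be reproduced without the Hadoop client by calling the WebHDFS REST API that HttpFS serves on this port, with explicit timeouts so it fails fast. A minimal diagnostic sketch; the /tmp path and user.name=root are placeholders, not values from the setup above:

import java.net.{HttpURLConnection, URL}
import scala.io.Source

// GETFILESTATUS is the REST operation behind fs.exists/getFileStatus.
val probe = new URL(
  "http://192.168.99.100:31400/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=root")
val conn = probe.openConnection().asInstanceOf[HttpURLConnection]
conn.setConnectTimeout(5000) // fail fast instead of waiting out the default timeout
conn.setReadTimeout(5000)

val code = conn.getResponseCode // throws SocketTimeoutException if HttpFS never answers
println(s"HTTP $code")
if (code == 200) println(Source.fromInputStream(conn.getInputStream).mkString)

If this probe also times out, the problem sits between the NodePort and the HttpFS process rather than in the Hadoop client configuration.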

The HDFS Kubernetes setup is as follows:

apiVersion: v1
kind: Service
metadata:
  name: namenode
spec:
  type: NodePort
  ports:
    - name: client
      port: 8020
    - name: hdfs
      port: 50070
      nodePort: 30070
    - name: httpfs
      port: 14000
      nodePort: 31400
  selector:
    hdfs: namenode
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: namenode
spec:
  replicas: 1
  template:
    metadata:
      labels:
        hdfs: namenode
    spec:
      containers:
        - env:
            - name: CLUSTER_NAME
              value: test
          image: bde2020/hadoop-namenode:2.0.0-hadoop2.7.4-java8
          name: namenode
          args:
            - "/run.sh &"
            - "/opt/hadoop-2.7.4/sbin/httpfs.sh start"
          envFrom:
            - configMapRef:
                name: hive-env
          ports:
            - containerPort: 50070
            - containerPort: 8020
            - containerPort: 14000
          volumeMounts:
            - mountPath: /hadoop/dfs/name
              name: namenode
      volumes:
        - name: namenode
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: datanode
spec:
  ports:
    - name: hdfs
      port: 50075
      targetPort: 50075
  selector:
    hdfs: datanode
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: datanode
spec:
  replicas: 1
  template:
    metadata:
      labels:
        hdfs: datanode
    spec:
      containers:
        - env:
            - name: SERVICE_PRECONDITION
              value: namenode:50070
          image: bde2020/hadoop-datanode:2.0.0-hadoop2.7.4-java8
          envFrom:
            - configMapRef:
                name: hive-env
          name: datanode
          ports:
            - containerPort: 50075
          volumeMounts:
            - mountPath: /hadoop/dfs/data
              name: datanode
      volumes:
        - name: datanode
          emptyDir: {}
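Nothing in these manifests caps the HttpFS response time by itself; on the client side, raising the WebHDFS socket timeouts at least distinguishes a slow gateway from one that never answers. A sketch, assuming a Hadoop client from 2.9 or newer, where the dfs.webhdfs.socket.* keys are honored (the 2.7.x client matching the images above uses a fixed timeout):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val conf = new Configuration()
// Honored by Hadoop 2.9+ clients; silently ignored by older ones.
conf.set("dfs.webhdfs.socket.connect-timeout", "10s")
conf.set("dfs.webhdfs.socket.read-timeout", "120s")

val fs = FileSystem.get(new java.net.URI("webhdfs://192.168.99.100:31400"), conf)
val check = fs.exists(new Path("/tmp")) // placeholder path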

UPD: ping gives these results (192.168.99.100 is the minikube IP, 31400 is the service's NodePort):

ping 192.168.99.100  -M do -s 28
PING 192.168.99.100 (192.168.99.100) 28(56) bytes of data.
36 bytes from 192.168.99.100: icmp_seq=1 ttl=64 time=0.845 ms
36 bytes from 192.168.99.100: icmp_seq=2 ttl=64 time=0.612 ms
36 bytes from 192.168.99.100: icmp_seq=3 ttl=64 time=0.347 ms
36 bytes from 192.168.99.100: icmp_seq=4 ttl=64 time=0.287 ms
36 bytes from 192.168.99.100: icmp_seq=5 ttl=64 time=0.547 ms
36 bytes from 192.168.99.100: icmp_seq=6 ttl=64 time=0.357 ms
36 bytes from 192.168.99.100: icmp_seq=7 ttl=64 time=0.544 ms
36 bytes from 192.168.99.100: icmp_seq=8 ttl=64 time=0.702 ms
36 bytes from 192.168.99.100: icmp_seq=9 ttl=64 time=0.307 ms
36 bytes from 192.168.99.100: icmp_seq=10 ttl=64 time=0.346 ms
36 bytes from 192.168.99.100: icmp_seq=11 ttl=64 time=0.294 ms
36 bytes from 192.168.99.100: icmp_seq=12 ttl=64 time=0.319 ms
36 bytes from 192.168.99.100: icmp_seq=13 ttl=64 time=0.521 ms
^C
--- 192.168.99.100 ping statistics ---
13 packets transmitted, 13 received, 0% packet loss, time 12270ms
rtt min/avg/max/mdev = 0.287/0.463/0.845/0.173 ms

And with both host and port:

ping 192.168.99.100 31400 -M do -s 28
PING 31400 (0.0.122.168) 28(96) bytes of data.
^C
--- 31400 ping statistics ---
27 packets transmitted, 0 received, 100% packet loss, time 26603ms
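Note that ping speaks ICMP and takes no port argument, which is why the second run above treats 31400 as a second destination address; it says nothing about the NodePort. The telnet-style TCP check can be scripted with a plain java.net.Socket:

import java.net.{InetSocketAddress, Socket}
import scala.util.Try

// TCP reachability check for the HttpFS NodePort (what telnet verifies).
def portOpen(host: String, port: Int, timeoutMs: Int = 3000): Boolean =
  Try {
    val s = new Socket()
    try s.connect(new InetSocketAddress(host, port), timeoutMs)
    finally s.close()
  }.isSuccess

println(portOpen("192.168.99.100", 31400)) // true -> the port accepts connections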

1 Answer:

Answer 0 (score: 1)

A colleague of mine found out that the problem is with Docker inside minikube. Running this before setting up HDFS on Kubernetes solved the problem:

minikube ssh "sudo ip link set docker0 promisc on"
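Presumably this works because minikube runs the pods on the VM's docker0 bridge, and putting that bridge into promiscuous mode is a known workaround for bridge-forwarded traffic (such as NodePort connections into the pod network) being dropped; that would match the symptom of a TCP connect succeeding while the HTTP response never arrives.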