HDFS namenode on Kubernetes not showing the datanode list correctly

Date: 2020-07-05 20:59:50

Tags: kubernetes hdfs eks

I am trying to set up HDFS on an EKS cluster. I deployed one namenode and two datanodes, and everything started up fine.

But a strange error occurs: when I check the namenode GUI or query the dfsadmin client for the list of datanodes, it randomly shows only one datanode, i.e. sometimes datanode-0 and sometimes datanode-1. It never shows both/all of the datanodes.

What could the problem be here? I am even using a headless service for the datanodes. Please help.

#clusterIP service of namenode
apiVersion: v1
kind: Service
metadata:
  name: hdfs-name
  namespace: pulse
  labels:
    app.kubernetes.io/name: hdfs-name
    app.kubernetes.io/version: "1.0"
spec:
  ports:
    - port: 8020
      protocol: TCP
      name: nn-rpc
    - port: 9870
      protocol: TCP
      name: nn-web
  selector:
    app.kubernetes.io/name: hdfs-name
    app.kubernetes.io/version: "1.0"
  type: ClusterIP
---
#namenode StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: hdfs-name
  namespace: pulse
  labels:
    app.kubernetes.io/name: hdfs-name
    app.kubernetes.io/version: "1.0"
spec:
  serviceName: hdfs-name
  replicas: 1       #TODO 2 namenodes (1 active, 1 standby)
  selector:
    matchLabels:
      app.kubernetes.io/name: hdfs-name
      app.kubernetes.io/version: "1.0"
  template:
    metadata:
      labels:
        app.kubernetes.io/name: hdfs-name
        app.kubernetes.io/version: "1.0"
    spec:
      initContainers:
      - name: delete-lost-found
        image: busybox
        command: ["sh", "-c", "rm -rf /hadoop/dfs/name/lost+found"]
        volumeMounts:
        - name: hdfs-name-pv-claim
          mountPath: /hadoop/dfs/name
      containers:
      - name: hdfs-name
        image: bde2020/hadoop-namenode
        env:
        - name: CLUSTER_NAME
          value: hdfs-k8s
        - name: HDFS_CONF_dfs_permissions_enabled
          value: "false"
        #- name: HDFS_CONF_dfs_replication              #not needed
        #  value: "2"  
        ports:
        - containerPort: 8020
          name: nn-rpc
        - containerPort: 9870
          name: nn-web
        resources:
          limits:
            cpu: "500m"
            memory: 1Gi
          requests:
            cpu: "500m"
            memory: 1Gi
        volumeMounts:
        - name: hdfs-name-pv-claim
          mountPath: /hadoop/dfs/name
  volumeClaimTemplates:
  - metadata:
      name: hdfs-name-pv-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: ebs
      resources:
        requests:
          storage: 1Gi
---
#headless service of datanode
apiVersion: v1
kind: Service
metadata:
  name: hdfs-data
  namespace: pulse
  labels:
    app.kubernetes.io/name: hdfs-data
    app.kubernetes.io/version: "1.0"
spec:
  ports:
    - port: 9866
      protocol: TCP
      name: dn-rpc
    - port: 9864
      protocol: TCP
      name: dn-web
  selector:
    app.kubernetes.io/name: hdfs-data
    app.kubernetes.io/version: "1.0"
  clusterIP: None
  type: ClusterIP
---
#datanode StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: hdfs-data
  namespace: pulse
  labels:
    app.kubernetes.io/name: hdfs-data
    app.kubernetes.io/version: "1.0"
spec:
  serviceName: hdfs-data
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: hdfs-data
      app.kubernetes.io/version: "1.0"
  template:
    metadata:
      labels:
        app.kubernetes.io/name: hdfs-data
        app.kubernetes.io/version: "1.0"
    spec:
      containers:
      - name: hdfs-data
        image: bde2020/hadoop-datanode
        env:
        - name: CORE_CONF_fs_defaultFS
          value: hdfs://hdfs-name:8020
        ports:           
        - containerPort: 9866
          name: dn-rpc
        - containerPort: 9864
          name: dn-web
        resources:
          limits:
            cpu: "500m"
            memory: 1Gi
          requests:
            cpu: "500m"
            memory: 1Gi
        volumeMounts:
        - name: hdfs-data-pv-claim
          mountPath: /hadoop/dfs/data 
  volumeClaimTemplates:
  - metadata:
      name: hdfs-data-pv-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: ebs
      resources:
        requests:
          storage: 1Gi

Running hdfs dfsadmin -report shows only one datanode, e.g. sometimes datanode-0 and sometimes datanode-1.
The datanodes have different hostnames, datanode-0 and datanode-1, but they register with the same name (127.0.0.1:9866 (localhost)). Could this be the problem? If so, how do I fix it?

Also, I don't see any HDFS block replication happening, even though the replication factor is 3.
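
For reference, the bde2020 images map HDFS_CONF_* environment variables into hdfs-site.xml (that is what the commented-out lines in my namenode manifest above are for), so the replication factor could be pinned explicitly. A sketch, keeping in mind that HDFS can never place more replicas than there are registered datanodes, so with only one datanode visible nothing replicates regardless:

        env:
        - name: HDFS_CONF_dfs_replication   # rendered into hdfs-site.xml as dfs.replication
          value: "2"                        # at most the number of live datanodes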

Update
Hi, this turned out to be an Istio proxy issue. I uninstalled Istio and that resolved it. The Istio proxy was setting the name to 127.0.0.1 instead of the actual IP.

2 Answers:

Answer 0 (score: 1)

I ran into the same problem. The workaround I am currently using is to disable Envoy's redirection of inbound traffic to the namenode on port 9000 (8020 in your case) by adding the following annotation to the Hadoop namenode:

traffic.sidecar.istio.io/excludeInboundPorts: "9000"
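
In your manifests this goes on the namenode StatefulSet's pod template, with the port changed to 8020. A sketch, not tested against your exact setup:

  template:
    metadata:
      annotations:
        # skip Envoy interception for inbound namenode RPC, so the
        # datanodes' source IPs reach the namenode unchanged
        traffic.sidecar.istio.io/excludeInboundPorts: "8020"
      labels:
        app.kubernetes.io/name: hdfs-name
        app.kubernetes.io/version: "1.0"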

Reference: https://istio.io/v1.4/docs/reference/config/annotations/

After reading through some Istio issues, it seems the source IP is not preserved when traffic is redirected through Envoy.

Related issues:
https://github.com/istio/istio/issues/5679
https://github.com/istio/istio/pull/23275

I have not tried the TPROXY approach yet, since I am not currently running Istio 1.6, which includes the TPROXY source-IP preservation fix.
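
If you do try TPROXY, the interception mode is switched per pod with another annotation from the same reference page. A sketch only, which I have not verified on this setup:

  template:
    metadata:
      annotations:
        # TPROXY keeps the original source IP; the default mode is REDIRECT
        sidecar.istio.io/interceptionMode: TPROXY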

Answer 1 (score: 0)

This was an Istio proxy issue. I uninstalled Istio and that resolved it. The Istio proxy was setting the name to 127.0.0.1 instead of the actual IP.