Why do some kube-system Pods (e.g. kube-proxy) have the same Pod IP as the node they run on?

Asked: 2019-02-19 18:47:56

Tags: kubernetes

I noticed something in one of my clusters today that I wasn't expecting and couldn't find an explanation for: many kube-system Pods have the same Pod IP as the node they run on. I'd like to understand why that is, but I haven't been able to find any documentation or discussion about it. Here is what I see:

k get nodes -o wide
NAME                       STATUS   ROLES   AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
aks-agentpool-14855512-0   Ready    agent   47m   v1.12.5   10.240.0.66   <none>        Ubuntu 16.04.5 LTS   4.15.0-1037-azure   docker://3.0.4
aks-agentpool-14855512-1   Ready    agent   47m   v1.12.5   10.240.0.4    <none>        Ubuntu 16.04.5 LTS   4.15.0-1037-azure   docker://3.0.4
aks-agentpool-14855512-2   Ready    agent   47m   v1.12.5   10.240.0.35   <none>        Ubuntu 16.04.5 LTS   4.15.0-1037-azure   docker://3.0.4
k get po -n kube-system -o wide | grep '10.240.0.4 '
azure-cni-networkmonitor-rqs8q       1/1     Running   0          48m   10.240.0.4    aks-agentpool-14855512-1   <none>
azure-ip-masq-agent-dj8w5            1/1     Running   0          48m   10.240.0.4    aks-agentpool-14855512-1   <none>
kube-proxy-jpjjc                     1/1     Running   0          48m   10.240.0.4    aks-agentpool-14855512-1   <none>
kube-svc-redirect-bfvlk              2/2     Running   0          48m   10.240.0.4    aks-agentpool-14855512-1   <none>

My understanding is that a Pod's IP should be different from the IP of the node it runs on, and that a Service should be used to expose a Pod. However, that does not appear to be the case here:

k get svc -n kube-system
NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
heapster               ClusterIP   10.0.0.57     <none>        80/TCP          55m
kube-dns               ClusterIP   10.0.0.10     <none>        53/UDP,53/TCP   55m
kubernetes-dashboard   ClusterIP   10.0.105.92   <none>        80/TCP          55m
metrics-server         ClusterIP   10.0.179.25   <none>        443/TCP         55m
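
As an aside, one way to confirm which Pods actually back these Services is to inspect their Endpoints; this check is my addition rather than part of the original question:

k get endpoints -n kube-system -o wide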

At first I thought this behaviour was specific to AKS, but the same turns out to be true on GKE.

I'm afraid I may be missing a very basic concept that is keeping me from understanding this. Any help would be greatly appreciated.

Update: this is because hostNetwork: true is set in the Pod's YAML.

You can see this by running:

k get po kube-proxy-jpjjc  -n kube-system -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    aks.microsoft.com/release-time: 'seconds:1550597164 nanos:675278758 '
  creationTimestamp: "2019-02-19T17:29:15Z"
  generateName: kube-proxy-
  labels:
    component: kube-proxy
    controller-revision-hash: 68c8cf5db6
    pod-template-generation: "1"
    tier: node
  name: kube-proxy-jpjjc
  namespace: kube-system
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: DaemonSet
    name: kube-proxy
    uid: 75df85c8-346b-11e9-a1db-667e55a73bba
  resourceVersion: "693"
  selfLink: /api/v1/namespaces/kube-system/pods/kube-proxy-jpjjc
  uid: e1004b3e-346b-11e9-a1db-667e55a73bba
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchFields:
          - key: metadata.name
            operator: In
            values:
            - aks-agentpool-14855512-1
  containers:
  - command:
    - /hyperkube
    - proxy
    - --kubeconfig=/var/lib/kubelet/kubeconfig
    - --cluster-cidr=10.240.0.0/16
    - --feature-gates=ExperimentalCriticalPodAnnotation=true
    env:
    - name: KUBERNETES_PORT_443_TCP_ADDR
      value: nodeport-test-cni-87e6d01c.hcp.westus2.azmk8s.io
    - name: KUBERNETES_PORT
      value: tcp://nodeport-test-cni-87e6d01c.hcp.westus2.azmk8s.io:443
    - name: KUBERNETES_PORT_443_TCP
      value: tcp://nodeport-test-cni-87e6d01c.hcp.westus2.azmk8s.io:443
    - name: KUBERNETES_SERVICE_HOST
      value: nodeport-test-cni-87e6d01c.hcp.westus2.azmk8s.io
    image: k8s.gcr.io/hyperkube-amd64:v1.12.5
    imagePullPolicy: IfNotPresent
    name: kube-proxy
    resources:
      requests:
        cpu: 100m
    securityContext:
      privileged: true
      procMount: Default
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/lib/kubelet
      name: kubeconfig
      readOnly: true
    - mountPath: /etc/kubernetes/certs
      name: certificates
      readOnly: true
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-7m959
      readOnly: true
  dnsPolicy: ClusterFirst
  hostNetwork: true
  nodeName: aks-agentpool-14855512-1
  nodeSelector:
    beta.kubernetes.io/os: linux
  priority: 1000000
  priorityClassName: high-priority
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
    operator: Equal
    value: "true"
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/disk-pressure
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/memory-pressure
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/unschedulable
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/network-unavailable
    operator: Exists
  volumes:
  - hostPath:
      path: /var/lib/kubelet
      type: ""
    name: kubeconfig
  - hostPath:
      path: /etc/kubernetes/certs
      type: ""
    name: certificates
  - name: default-token-7m959
    secret:
      defaultMode: 420
      secretName: default-token-7m959
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2019-02-19T17:29:18Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2019-02-19T17:29:29Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2019-02-19T17:29:29Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2019-02-19T17:29:15Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://8934a2ec756bf77ad34b352ab78f70f41c7a52f126e511b235378b65c708ff15
    image: k8s.gcr.io/hyperkube-amd64:v1.12.5
    imageID: docker-pullable://k8s.gcr.io/hyperkube-amd64@sha256:82add6703e6e28b50f2457b3a3e4eec573a2603437cb9df1af5670dd7e640e75
    lastState: {}
    name: kube-proxy
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: "2019-02-19T17:29:28Z"
  hostIP: 10.240.0.4
  phase: Running
  podIP: 10.240.0.4
  qosClass: Burstable
  startTime: "2019-02-19T17:29:18Z"

1 answer:

Answer 0 (score: 0):

This is because hostNetwork: true is set in the Pod's YAML.

You can see it by running the following (output trimmed here to the relevant fields; the full output is shown in the question update above):

k get po kube-proxy-jpjjc  -n kube-system -o yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy-jpjjc
  namespace: kube-system
  ...
spec:
  ...
  hostNetwork: true
  nodeName: aks-agentpool-14855512-1
  ...
status:
  ...
  hostIP: 10.240.0.4
  phase: Running
  podIP: 10.240.0.4
  ...