Kubernetes: Weave picks the public IP on one of the worker nodes

Date: 2019-12-10 20:28:51

Tags: docker kubernetes kubernetes-pod weave

I have a Kubernetes cluster with 2 masters and 2 workers. Each node has a private IP in the 192.168.5.x range and a public IP. After creating the Weave daemonset, the Weave pod picked the correct internal IP on one node, but on the other node it picked the public IP. Is there any way to instruct the Weave pod to pick the private IP on the node?

I built the cluster from scratch, doing everything manually on VMs created with VirtualBox on my local laptop. I followed the guide below:

https://github.com/mmumshad/kubernetes-the-hard-way

After deploying the Weave pods on the worker nodes, the Weave pod on one of the worker nodes uses the NAT IP, as shown below.

10.0.2.15 is the NAT IP, while 192.168.5.12 is the internal IP.

kubectl get pods -n kube-system -o wide
NAME              READY   STATUS    RESTARTS   AGE   IP             NODE      NOMINATED NODE   READINESS GATES
weave-net-p4czj   2/2     Running   2          26h   192.168.5.12   worker1   <none>           <none>
weave-net-pbb86   2/2     Running   8          25h   10.0.2.15      worker2   <none>           <none>
[@master1 ~]$ kubectl describe node
Name:               worker1
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=worker1
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 10 Dec 2019 02:07:09 -0500
Taints:             <none>
Unschedulable:      false
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Wed, 11 Dec 2019 04:50:15 -0500   Wed, 11 Dec 2019 04:50:15 -0500   WeaveIsUp                    Weave pod has set this
  MemoryPressure       False   Wed, 11 Dec 2019 07:13:43 -0500   Tue, 10 Dec 2019 02:09:09 -0500   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Wed, 11 Dec 2019 07:13:43 -0500   Tue, 10 Dec 2019 02:09:09 -0500   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Wed, 11 Dec 2019 07:13:43 -0500   Tue, 10 Dec 2019 02:09:09 -0500   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Wed, 11 Dec 2019 07:13:43 -0500   Tue, 10 Dec 2019 04:16:26 -0500   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.5.12
  Hostname:    worker1
Capacity:
 cpu:                1
 ephemeral-storage:  14078Mi
 hugepages-2Mi:      0
 memory:             499552Ki
 pods:               110
Allocatable:
 cpu:                1
 ephemeral-storage:  13285667614
 hugepages-2Mi:      0
 memory:             397152Ki
 pods:               110
System Info:
 Machine ID:                 455146bc2c2f478a859bf39ac2641d79
 System UUID:                D4C6F432-3C7F-4D27-A21B-D78A0D732FB6
 Boot ID:                    25160713-e53e-4a9f-b1f5-eec018996161
 Kernel Version:             4.4.206-1.el7.elrepo.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.6.3
 Kubelet Version:            v1.13.0
 Kube-Proxy Version:         v1.13.0
Non-terminated Pods:         (2 in total)
  Namespace                  Name                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                   ------------  ----------  ---------------  -------------  ---
  default                    ng1-6677cd8f9-hws8n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         26h
  kube-system                weave-net-p4czj        20m (2%)      0 (0%)      0 (0%)           0 (0%)         26h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests  Limits
  --------           --------  ------
  cpu                20m (2%)  0 (0%)
  memory             0 (0%)    0 (0%)
  ephemeral-storage  0 (0%)    0 (0%)
Events:              <none>


Name:               worker2
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=worker2
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 10 Dec 2019 03:14:01 -0500
Taints:             <none>
Unschedulable:      false
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Wed, 11 Dec 2019 04:50:32 -0500   Wed, 11 Dec 2019 04:50:32 -0500   WeaveIsUp                    Weave pod has set this
  MemoryPressure       False   Wed, 11 Dec 2019 07:13:43 -0500   Tue, 10 Dec 2019 03:14:03 -0500   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Wed, 11 Dec 2019 07:13:43 -0500   Tue, 10 Dec 2019 03:14:03 -0500   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Wed, 11 Dec 2019 07:13:43 -0500   Tue, 10 Dec 2019 03:14:03 -0500   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Wed, 11 Dec 2019 07:13:43 -0500   Tue, 10 Dec 2019 03:56:47 -0500   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  10.0.2.15
  Hostname:    worker2
Capacity:
 cpu:                1
 ephemeral-storage:  14078Mi
 hugepages-2Mi:      0
 memory:             499552Ki
 pods:               110
Allocatable:
 cpu:                1
 ephemeral-storage:  13285667614
 hugepages-2Mi:      0
 memory:             397152Ki
 pods:               110
System Info:
 Machine ID:                 455146bc2c2f478a859bf39ac2641d79
 System UUID:                68F543D7-EDBF-4AF6-8354-A99D96D994EF
 Boot ID:                    5775abf1-97dc-411f-a5a0-67f51cc8daf3
 Kernel Version:             4.4.206-1.el7.elrepo.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.6.3
 Kubelet Version:            v1.13.0
 Kube-Proxy Version:         v1.13.0
Non-terminated Pods:         (2 in total)
  Namespace                  Name                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                    ------------  ----------  ---------------  -------------  ---
  default                    ng2-569d45c6b5-ppkwg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         26h
  kube-system                weave-net-pbb86         20m (2%)      0 (0%)      0 (0%)           0 (0%)         26h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests  Limits
  --------           --------  ------
  cpu                20m (2%)  0 (0%)
  memory             0 (0%)    0 (0%)
  ephemeral-storage  0 (0%)    0 (0%)
Events:              <none>

1 Answer:

Answer 0 (score: 0):

I can see that you have different IPs not only on the pods but also on the nodes.

As you can see in the kubectl describe node output, the InternalIP of worker1 is 192.168.5.12, while the InternalIP of worker2 is 10.0.2.15.
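A quick way to compare the address each node registered is a jsonpath query like the one below (a sketch, not part of the original answer):

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'

With the nodes above, this should print 192.168.5.12 for worker1 and 10.0.2.15 for worker2.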

This is not the expected behavior, so it is important to make sure both VirtualBox VMs are attached to the same adapter type.

Both VMs should be on the same network; as you confirmed in the comments, that was the situation here, and it explains this behavior.

Here is an example of that configuration:

[Image: VirtualBox Adapter Settings]
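If you prefer the command line over the GUI, VBoxManage can show and change the adapter settings. The VM names, NIC number, and host-only network name below are illustrative assumptions, and modifyvm only works while the VM is powered off:

# Show which adapter types each VM currently uses (hypothetical VM names)
VBoxManage showvminfo worker1 | grep -i "NIC"
VBoxManage showvminfo worker2 | grep -i "NIC"

# Attach the second NIC of worker2 to the same host-only network as worker1
VBoxManage modifyvm worker2 --nic2 hostonly --hostonlyadapter2 vboxnet0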

As you mentioned in the comments, the first node was added manually and the second one was added during the TLS bootstrap, so it was registered even though its IP address was "wrong".

To fix this, your best option is to bootstrap the cluster again from scratch, using the same adapter settings for all nodes in VirtualBox.
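When re-bootstrapping with kubelet systemd units (as in kubernetes-the-hard-way), one extra safeguard, offered here only as an assumption rather than part of the original answer, is to pass kubelet's --node-ip flag so each node registers its private address explicitly, regardless of which adapter VirtualBox lists first:

# Hypothetical excerpt from /etc/systemd/system/kubelet.service on a worker;
# the other flags from the guide are omitted here
ExecStart=/usr/local/bin/kubelet \
  --node-ip=<this node's 192.168.5.x address> \
  ...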