Kubernetes is not spreading pods across available nodes

Date: 2017-12-29 04:53:18

Tags: kubernetes google-cloud-platform google-kubernetes-engine

I have a GKE cluster with a single node pool of size 2. When I add a third node, none of the pods are distributed onto that third node.

Here is the original 2-node node pool:

$ kubectl get node
NAME                              STATUS    ROLES     AGE       VERSION
gke-cluster0-pool-d59e9506-b7nb   Ready     <none>    13m       v1.8.3-gke.0
gke-cluster0-pool-d59e9506-vp6t   Ready     <none>    18m       v1.8.3-gke.0

Here are the pods running on the original node pool:

$ kubectl get po -o wide --all-namespaces
NAMESPACE     NAME                                         READY     STATUS      RESTARTS   AGE       IP           NODE
default       attachment-proxy-659bdc84d-ckdq9             1/1       Running     0          10m       10.0.38.3    gke-cluster0-pool-d59e9506-vp6t
default       elasticsearch-0                              1/1       Running     0          4m        10.0.39.11   gke-cluster0-pool-d59e9506-b7nb
default       front-webapp-646bc49675-86jj6                1/1       Running     0          10m       10.0.38.10   gke-cluster0-pool-d59e9506-vp6t
default       kafka-0                                      1/1       Running     3          4m        10.0.39.9    gke-cluster0-pool-d59e9506-b7nb
default       mailgun-http-98f8d997c-hhfdc                 1/1       Running     0          4m        10.0.38.17   gke-cluster0-pool-d59e9506-vp6t
default       stamps-5b6fc489bc-6xtqz                      2/2       Running     3          10m       10.0.38.13   gke-cluster0-pool-d59e9506-vp6t
default       user-elasticsearch-6b6dd7fc8-b55xx           1/1       Running     0          10m       10.0.38.4    gke-cluster0-pool-d59e9506-vp6t
default       user-http-analytics-6bdd49bd98-p5pd5         1/1       Running     0          4m        10.0.39.8    gke-cluster0-pool-d59e9506-b7nb
default       user-http-graphql-67884c678c-7dcdq           1/1       Running     0          4m        10.0.39.7    gke-cluster0-pool-d59e9506-b7nb
default       user-service-5cbb8cfb4f-t6zhv                1/1       Running     0          4m        10.0.38.15   gke-cluster0-pool-d59e9506-vp6t
default       user-streams-0                               1/1       Running     0          4m        10.0.39.10   gke-cluster0-pool-d59e9506-b7nb
default       user-streams-elasticsearch-c64b64d6f-2nrtl   1/1       Running     3          10m       10.0.38.6    gke-cluster0-pool-d59e9506-vp6t
default       zookeeper-0                                  1/1       Running     0          4m        10.0.39.12   gke-cluster0-pool-d59e9506-b7nb
kube-lego     kube-lego-7799f6b457-skkrc                   1/1       Running     0          10m       10.0.38.5    gke-cluster0-pool-d59e9506-vp6t
kube-system   event-exporter-v0.1.7-7cb7c5d4bf-vr52v       2/2       Running     0          10m       10.0.38.7    gke-cluster0-pool-d59e9506-vp6t
kube-system   fluentd-gcp-v2.0.9-648rh                     2/2       Running     0          14m       10.0.38.2    gke-cluster0-pool-d59e9506-vp6t
kube-system   fluentd-gcp-v2.0.9-fqjz6                     2/2       Running     0          9m        10.0.39.2    gke-cluster0-pool-d59e9506-b7nb
kube-system   heapster-v1.4.3-6fc45b6cc4-8cl72             3/3       Running     0          4m        10.0.39.6    gke-cluster0-pool-d59e9506-b7nb
kube-system   k8s-snapshots-5699c68696-h8r75               1/1       Running     0          4m        10.0.38.16   gke-cluster0-pool-d59e9506-vp6t
kube-system   kube-dns-778977457c-b48w5                    3/3       Running     0          4m        10.0.39.5    gke-cluster0-pool-d59e9506-b7nb
kube-system   kube-dns-778977457c-sw672                    3/3       Running     0          10m       10.0.38.9    gke-cluster0-pool-d59e9506-vp6t
kube-system   kube-dns-autoscaler-7db47cb9b7-tjt4l         1/1       Running     0          10m       10.0.38.11   gke-cluster0-pool-d59e9506-vp6t
kube-system   kube-proxy-gke-cluster0-pool-d59e9506-b7nb   1/1       Running     0          9m        10.128.0.4   gke-cluster0-pool-d59e9506-b7nb
kube-system   kube-proxy-gke-cluster0-pool-d59e9506-vp6t   1/1       Running     0          14m       10.128.0.2   gke-cluster0-pool-d59e9506-vp6t
kube-system   kubernetes-dashboard-76c679977c-mwqlv        1/1       Running     0          10m       10.0.38.8    gke-cluster0-pool-d59e9506-vp6t
kube-system   l7-default-backend-6497bcdb4d-wkx28          1/1       Running     0          10m       10.0.38.12   gke-cluster0-pool-d59e9506-vp6t
kube-system   nginx-ingress-controller-78d546664f-gf6mx    1/1       Running     0          4m        10.0.39.3    gke-cluster0-pool-d59e9506-b7nb
kube-system   tiller-deploy-5458cb4cc-26x26                1/1       Running     0          4m        10.0.39.4    gke-cluster0-pool-d59e9506-b7nb

Then I added another node to the node pool:

gcloud container clusters resize cluster0 --node-pool pool --size 3

The third node was added and is Ready:

NAME                              STATUS    ROLES     AGE       VERSION
gke-cluster0-pool-d59e9506-1rzm   Ready     <none>    3m        v1.8.3-gke.0
gke-cluster0-pool-d59e9506-b7nb   Ready     <none>    14m       v1.8.3-gke.0
gke-cluster0-pool-d59e9506-vp6t   Ready     <none>    19m       v1.8.3-gke.0

However, no pods other than those belonging to DaemonSets were scheduled onto the added node:

$ kubectl get po -o wide --all-namespaces
NAMESPACE     NAME                                         READY     STATUS      RESTARTS   AGE       IP           NODE
default       attachment-proxy-659bdc84d-ckdq9             1/1       Running     0          17m       10.0.38.3    gke-cluster0-pool-d59e9506-vp6t
default       elasticsearch-0                              1/1       Running     0          10m       10.0.39.11   gke-cluster0-pool-d59e9506-b7nb
default       front-webapp-646bc49675-86jj6                1/1       Running     0          17m       10.0.38.10   gke-cluster0-pool-d59e9506-vp6t
default       kafka-0                                      1/1       Running     3          11m       10.0.39.9    gke-cluster0-pool-d59e9506-b7nb
default       mailgun-http-98f8d997c-hhfdc                 1/1       Running     0          10m       10.0.38.17   gke-cluster0-pool-d59e9506-vp6t
default       stamps-5b6fc489bc-6xtqz                      2/2       Running     3          16m       10.0.38.13   gke-cluster0-pool-d59e9506-vp6t
default       user-elasticsearch-6b6dd7fc8-b55xx           1/1       Running     0          17m       10.0.38.4    gke-cluster0-pool-d59e9506-vp6t
default       user-http-analytics-6bdd49bd98-p5pd5         1/1       Running     0          10m       10.0.39.8    gke-cluster0-pool-d59e9506-b7nb
default       user-http-graphql-67884c678c-7dcdq           1/1       Running     0          10m       10.0.39.7    gke-cluster0-pool-d59e9506-b7nb
default       user-service-5cbb8cfb4f-t6zhv                1/1       Running     0          10m       10.0.38.15   gke-cluster0-pool-d59e9506-vp6t
default       user-streams-0                               1/1       Running     0          10m       10.0.39.10   gke-cluster0-pool-d59e9506-b7nb
default       user-streams-elasticsearch-c64b64d6f-2nrtl   1/1       Running     3          17m       10.0.38.6    gke-cluster0-pool-d59e9506-vp6t
default       zookeeper-0                                  1/1       Running     0          10m       10.0.39.12   gke-cluster0-pool-d59e9506-b7nb
kube-lego     kube-lego-7799f6b457-skkrc                   1/1       Running     0          17m       10.0.38.5    gke-cluster0-pool-d59e9506-vp6t
kube-system   event-exporter-v0.1.7-7cb7c5d4bf-vr52v       2/2       Running     0          17m       10.0.38.7    gke-cluster0-pool-d59e9506-vp6t
kube-system   fluentd-gcp-v2.0.9-648rh                     2/2       Running     0          20m       10.0.38.2    gke-cluster0-pool-d59e9506-vp6t
kube-system   fluentd-gcp-v2.0.9-8tb4n                     2/2       Running     0          4m        10.0.40.2    gke-cluster0-pool-d59e9506-1rzm
kube-system   fluentd-gcp-v2.0.9-fqjz6                     2/2       Running     0          15m       10.0.39.2    gke-cluster0-pool-d59e9506-b7nb
kube-system   heapster-v1.4.3-6fc45b6cc4-8cl72             3/3       Running     0          11m       10.0.39.6    gke-cluster0-pool-d59e9506-b7nb
kube-system   k8s-snapshots-5699c68696-h8r75               1/1       Running     0          10m       10.0.38.16   gke-cluster0-pool-d59e9506-vp6t
kube-system   kube-dns-778977457c-b48w5                    3/3       Running     0          11m       10.0.39.5    gke-cluster0-pool-d59e9506-b7nb
kube-system   kube-dns-778977457c-sw672                    3/3       Running     0          17m       10.0.38.9    gke-cluster0-pool-d59e9506-vp6t
kube-system   kube-dns-autoscaler-7db47cb9b7-tjt4l         1/1       Running     0          17m       10.0.38.11   gke-cluster0-pool-d59e9506-vp6t
kube-system   kube-proxy-gke-cluster0-pool-d59e9506-1rzm   1/1       Running     0          4m        10.128.0.3   gke-cluster0-pool-d59e9506-1rzm
kube-system   kube-proxy-gke-cluster0-pool-d59e9506-b7nb   1/1       Running     0          15m       10.128.0.4   gke-cluster0-pool-d59e9506-b7nb
kube-system   kube-proxy-gke-cluster0-pool-d59e9506-vp6t   1/1       Running     0          20m       10.128.0.2   gke-cluster0-pool-d59e9506-vp6t
kube-system   kubernetes-dashboard-76c679977c-mwqlv        1/1       Running     0          17m       10.0.38.8    gke-cluster0-pool-d59e9506-vp6t
kube-system   l7-default-backend-6497bcdb4d-wkx28          1/1       Running     0          17m       10.0.38.12   gke-cluster0-pool-d59e9506-vp6t
kube-system   nginx-ingress-controller-78d546664f-gf6mx    1/1       Running     0          11m       10.0.39.3    gke-cluster0-pool-d59e9506-b7nb
kube-system   tiller-deploy-5458cb4cc-26x26                1/1       Running     0          11m       10.0.39.4    gke-cluster0-pool-d59e9506-b7nb

What is going on? Why are the pods not spreading onto the added node? I expected the pods to be distributed across the third node. How do I get the workload to spread onto the third node?

Technically, in terms of the resource requests in my manifests, my entire application fits on a single node. However, when the second node was added, the application did get distributed onto it. So I assumed that when I added a third node, pods would be scheduled onto that node as well. But that is not what I am seeing: only the DaemonSet pods are scheduled onto the third node. I have tried growing and shrinking the node pool, to no avail.
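One way to nudge the scheduler toward spreading replicas is pod anti-affinity, which is available on Kubernetes 1.8. This is only a sketch, assuming a hypothetical Deployment whose pods carry the label `app: front-webapp` (adjust the label to match your own manifests):

```yaml
# Hypothetical Deployment fragment: "preferred" anti-affinity asks the
# scheduler to avoid co-locating replicas of the same app on one node,
# without making scheduling fail when it cannot comply.
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: front-webapp   # assumed label; use your app's label
              topologyKey: kubernetes.io/hostname
```

Note that this only affects where new pods land; it does not move pods that are already running.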

Update

Two preemptible nodes restarted, and now all the pods are on a single node. What is going on? Is increasing resource requests the only way to get them to spread out?
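For reference, resource requests are set per container, and raising them does constrain placement, since the scheduler will only put a pod on a node with enough unreserved capacity. A minimal sketch with purely illustrative names and values:

```yaml
# Hypothetical container fragment: larger requests mean fewer pods fit per
# node, which indirectly forces spreading across nodes.
spec:
  containers:
  - name: app            # assumed container name
    image: example/app   # assumed image
    resources:
      requests:
        cpu: 500m        # illustrative values only
        memory: 512Mi
```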

2 answers:

Answer 0 (score: 4)

This is expected behavior. New pods will be scheduled onto empty nodes, but already-running pods are not moved automatically. The Kubernetes scheduler is generally conservative about rescheduling pods, so it will not do so without a reason. Pods can be stateful (like a database), so Kubernetes does not want to kill and reschedule them.

There is a project under development that does exactly what you want: https://github.com/kubernetes-incubator/descheduler I have not used it yet, but it is being actively developed by the community.
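For illustration, the descheduler is driven by a policy file. This is only a sketch based on the project's v1alpha1 policy format, with purely illustrative thresholds; check the project's README for the current schema:

```yaml
# Hypothetical DeschedulerPolicy: evict pods from over-utilized nodes so
# the scheduler can re-place them on under-utilized ones.
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        thresholds:          # nodes below all of these count as underutilized
          "cpu": 20
          "memory": 20
          "pods": 20
        targetThresholds:    # nodes above any of these are eviction candidates
          "cpu": 50
          "memory": 50
          "pods": 50
```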

Answer 1 (score: 0)

I'm a complete n00b here, still learning Docker/Kubernetes, but after reading your question it sounds like you may have a quorum issue. Have you tried going up to 5 nodes? (n/2 + 1) Both Kubernetes and Docker Swarmkit use the Raft consensus algorithm. You may also want to look into Raft. This video may help if it does match your dilemma; it talks about Raft and quorum. https://youtu.be/Qsv-q8WbIZY?t=2m58s