Kubernetes - What are kube-system pods, and is it safe to delete them?

Asked: 2016-04-10 19:31:11

Tags: kubernetes

I currently have a cluster running on GCloud, which I created with 3 nodes. This is what I get when I run kubectl describe nodes:
Name:           node1
Capacity:
cpu:        1
memory: 3800808Ki
pods:       40
Non-terminated Pods:        (3 in total)
Namespace           Name                                    CPU Requests    CPU Limits  Memory Requests Memory Limits
─────────           ────                                    ────────────    ──────────  ─────────────── ─────────────
default         my-pod1                                 100m (10%)  0 (0%)      0 (0%)      0 (0%)
default         my-pod2                             100m (10%)  0 (0%)      0 (0%)      0 (0%)
kube-system         fluentd-cloud-logging-gke-little-people-e39a45a8-node-75fn      100m (10%)  100m (10%)  200Mi (5%)  200Mi (5%)
Allocated resources:
(Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
CPU Requests    CPU Limits  Memory Requests Memory Limits
────────────    ──────────  ─────────────── ─────────────
300m (30%)  100m (10%)  200Mi (5%)  200Mi (5%)

Name:           node2
Capacity:
cpu:        1
memory: 3800808Ki
pods:       40
Non-terminated Pods:        (4 in total)
Namespace           Name                                    CPU Requests    CPU Limits  Memory Requests Memory Limits
─────────           ────                                    ────────────    ──────────  ─────────────── ─────────────
default         my-pod3                             100m (10%)  0 (0%)      0 (0%)      0 (0%)
kube-system         fluentd-cloud-logging-gke-little-people-e39a45a8-node-wcle      100m (10%)  100m (10%)  200Mi (5%)  200Mi (5%)
kube-system         heapster-v11-yi2nw                          100m (10%)  100m (10%)  236Mi (6%)  236Mi (6%)
kube-system         kube-ui-v4-5nh36                            100m (10%)  100m (10%)  50Mi (1%)   50Mi (1%)
Allocated resources:
(Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
CPU Requests    CPU Limits  Memory Requests Memory Limits
────────────    ──────────  ─────────────── ─────────────
400m (40%)  300m (30%)  486Mi (13%) 486Mi (13%)

Name:           node3
Capacity:
cpu:        1
memory: 3800808Ki
pods:       40
Non-terminated Pods:        (3 in total)
Namespace           Name                                    CPU Requests    CPU Limits  Memory Requests Memory Limits
─────────           ────                                    ────────────    ──────────  ─────────────── ─────────────
kube-system         fluentd-cloud-logging-gke-little-people-e39a45a8-node-xhdy      100m (10%)  100m (10%)  200Mi (5%)  200Mi (5%)
kube-system         kube-dns-v9-bo86j                           310m (31%)  310m (31%)  170Mi (4%)  170Mi (4%)
kube-system         l7-lb-controller-v0.5.2-ae0t2                       110m (11%)  110m (11%)  70Mi (1%)   120Mi (3%)
Allocated resources:
(Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
CPU Requests    CPU Limits  Memory Requests Memory Limits
────────────    ──────────  ─────────────── ─────────────
520m (52%)  520m (52%)  440Mi (11%) 490Mi (13%)

Now, as you can see, I have 3 pods of my own: 2 on node1 and 1 on node2. What I would like to do is move all my pods onto node1 and delete the other two nodes. However, there also appear to be pods belonging to the kube-system namespace, and I don't know what effect deleting them might have.

I can tell that the pods named fluentd-cloud-logging... and heapster... are used for logging and monitoring machine resource usage, but I really don't know whether I can move the pods kube-dns-v9-bo86j and kube-ui-v4-5nh36 to another node without repercussions.
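To see at a glance which system pods exist and where they are scheduled, one way (a minimal sketch, assuming a kubectl version that supports the `-o wide` output) is:

```shell
# List all pods in the kube-system namespace; -o wide also shows
# which node each pod is currently scheduled on.
kubectl get pods --namespace=kube-system -o wide
```

This makes it easier to confirm which kube-system pods would be affected before deleting a node.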

Can anyone help me understand how to proceed?

Thanks a lot.

2 Answers:

Answer 0 (score: 2)

It's perfectly fine to kill them so that they get rescheduled onto another node. All of them can be rescheduled except for the fluentd pods, which are bound to each node.
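One hedged way to do this cleanly, assuming a kubectl version with the cordon/drain commands and using the node names from the output above, is to cordon and drain the nodes you intend to remove so their pods are rescheduled elsewhere:

```shell
# Mark node2 and node3 unschedulable so no new pods land on them.
kubectl cordon node2
kubectl cordon node3

# Evict the pods on those nodes; Kubernetes reschedules them onto
# the remaining schedulable node(s). The per-node fluentd pods are
# recreated on whatever nodes remain, so they can be ignored here.
kubectl drain node2
kubectl drain node3
```

After draining, the nodes can be deleted without leaving pods stranded mid-flight.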

Answer 1 (score: 1)

If you want to shrink the size of your cluster, you can delete two of the three nodes and let the system reschedule any pods that were lost when the nodes were deleted. If there isn't enough space on the remaining node, you may see pods stuck permanently in the Pending state. Having kube-system pods pending isn't ideal, because each of them performs a "system function" for your cluster (e.g. DNS, monitoring, etc.), and without them running your cluster won't be fully functional.
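A sketch of shrinking the cluster with the gcloud CLI. The cluster name and zone below are assumptions inferred from the node names in the question (gke-little-people-...), so substitute your own values:

```shell
# Resize the cluster's node pool down to a single node; pods on the
# removed nodes are rescheduled onto the remaining node if it has
# enough capacity, otherwise they stay Pending.
gcloud container clusters resize little-people \
    --size=1 \
    --zone=us-central1-a
```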

You can also disable some of the system pods, if you don't need their functionality, using the gcloud container clusters update command.
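For example, one add-on that can be toggled this way is HTTP load balancing, which corresponds to the l7-lb-controller pod shown on node3. The cluster name and zone are assumptions carried over from the node names above, and the set of toggleable add-ons depends on your gcloud/GKE version:

```shell
# Disable the HTTP load balancing add-on so its kube-system pod
# (l7-lb-controller) is no longer run in the cluster.
gcloud container clusters update little-people \
    --zone=us-central1-a \
    --update-addons=HttpLoadBalancing=DISABLED
```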