Here is the allocation status of one of my nodes (based on requests):
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 374m (4%) 3151m (39%)
memory 493Mi (1%) 1939Mi (7%)
ephemeral-storage 0 (0%) 0 (0%)
attachable-volumes-gce-pd 0 0
Despite the low utilization, I expected the cluster autoscaler (which is enabled) to scale it down. But it doesn't.
Here are the running pods:
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
extra-services external-dns-cfd4bb858-fvpfj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 149m
istio-system istio-galley-65987fccb-prxk6 10m (0%) 0 (0%) 0 (0%) 0 (0%) 121m
istio-system istio-policy-76ddd9fc97-pkxhh 110m (1%) 2 (25%) 128Mi (0%) 1Gi (3%) 149m
kube-system fluentd-gcp-v3.2.0-7mndl 100m (1%) 1 (12%) 200Mi (0%) 500Mi (1%) 5h20m
kube-system kube-proxy-gke-my-node-name 100m (1%) 0 (0%) 0 (0%) 0 (0%) 5h20m
kube-system metrics-server-v0.3.1-8675cc4d57-xg9qt 53m (0%) 148m (1%) 145Mi (0%) 395Mi (1%) 120m
kube-system prometheus-to-sd-n2jfq 1m (0%) 3m (0%) 20Mi (0%) 20Mi (0%) 5h20m
Here are my DaemonSets:
➢ k get ds --all-namespaces
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system fluentd-gcp-v3.2.0 14 14 14 14 14 beta.kubernetes.io/fluentd-ds-ready=true 226d
kube-system metadata-proxy-v0.1 0 0 0 0 0 beta.kubernetes.io/metadata-proxy-ready=true 226d
kube-system nvidia-gpu-device-plugin 0 0 0 0 0 <none> 226d
kube-system prometheus-to-sd 14 14 14 14 14 beta.kubernetes.io/os=linux 159d
Why isn't the node being scaled down?
Edit: here is what I get when I try to manually drain the node:
cannot delete Pods with local storage (use --delete-local-data to override): istio-system/istio-policy-76ddd9fc97-pkxhh
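That error means the istio-policy Pod uses local storage (e.g. an emptyDir volume), which `kubectl drain` refuses to evict by default. If losing that volume's contents is acceptable, the drain can be forced with the override flag the error message suggests — a sketch, where `<node-name>` is a placeholder for the node shown above:

```shell
# Force eviction of pods with local storage (their emptyDir contents are lost)
# and skip DaemonSet-managed pods such as fluentd-gcp and prometheus-to-sd:
kubectl drain <node-name> --ignore-daemonsets --delete-local-data
```

Note this only empties the node manually; it does not by itself change what the autoscaler will do in the future.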
Answer 0 (score: 1)
Node autoscaling is based on scheduling: the scheduler will try to place Pods on existing nodes, and only if none of them can accommodate a Pod will the cluster scale up and schedule it on a new node. Conversely, the autoscaler scales a node down only when no Pods are scheduled on it, i.e. after it has been free of any scheduled Pods for some amount of time. You can find more information about this here
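Beyond scheduling pressure, the drain error in the question points at a likely concrete blocker: by default the cluster autoscaler will not evict Pods that use local storage, so the istio-policy Pod can pin the node. A common remedy is the `cluster-autoscaler.kubernetes.io/safe-to-evict` annotation on the Pod template — a sketch, assuming the Deployment is named `istio-policy` to match the Pod name prefix shown above:

```shell
# Mark pods from this Deployment as safe for the cluster autoscaler to evict
# despite their local storage, by annotating the pod template:
kubectl -n istio-system patch deployment istio-policy --type merge -p \
  '{"spec":{"template":{"metadata":{"annotations":{"cluster-autoscaler.kubernetes.io/safe-to-evict":"true"}}}}}'
```

With the annotation in place, the autoscaler may reschedule the Pod elsewhere and remove the underutilized node.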