I have set up a Kubernetes cluster with 1 master node and 1 worker node. For testing purposes I drove CPU and memory utilization up to 100%, yet the node status never becomes "NotReady". I am testing the pressure conditions: how can I flip the MemoryPressure status flag to True, or DiskPressure or PIDPressure to True?
Here are my master node's conditions:
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Wed, 27 Nov 2019 14:36:29 +0000 Wed, 27 Nov 2019 14:36:29 +0000 WeaveIsUp Weave pod has set this
MemoryPressure False Thu, 28 Nov 2019 07:36:46 +0000 Fri, 22 Nov 2019 13:30:38 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 28 Nov 2019 07:36:46 +0000 Fri, 22 Nov 2019 13:30:38 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 28 Nov 2019 07:36:46 +0000 Fri, 22 Nov 2019 13:30:38 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 28 Nov 2019 07:36:46 +0000 Fri, 22 Nov 2019 13:30:48 +0000 KubeletReady kubelet is posting ready status
Here is the pod information:
Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-5644d7b6d9-dm8v7 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 22d
kube-system coredns-5644d7b6d9-mz5rm 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 22d
kube-system etcd-ip-172-31-28-186.us-east-2.compute.internal 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22d
kube-system kube-apiserver-ip-172-31-28-186.us-east-2.compute.internal 250m (12%) 0 (0%) 0 (0%) 0 (0%) 22d
kube-system kube-controller-manager-ip-172-31-28-186.us-east-2.compute.internal 200m (10%) 0 (0%) 0 (0%) 0 (0%) 22d
kube-system kube-proxy-cw8vv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22d
kube-system kube-scheduler-ip-172-31-28-186.us-east-2.compute.internal 100m (5%) 0 (0%) 0 (0%) 0 (0%) 22d
kube-system weave-net-ct9zb 20m (1%) 0 (0%) 0 (0%) 0 (0%) 22d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 770m (38%) 0 (0%)
memory 140Mi (1%) 340Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
And here is the worker node:
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Thu, 28 Nov 2019 07:00:08 +0000 Thu, 28 Nov 2019 07:00:08 +0000 WeaveIsUp Weave pod has set this
MemoryPressure False Thu, 28 Nov 2019 07:39:03 +0000 Thu, 28 Nov 2019 07:00:00 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 28 Nov 2019 07:39:03 +0000 Thu, 28 Nov 2019 07:00:00 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 28 Nov 2019 07:39:03 +0000 Thu, 28 Nov 2019 07:00:00 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 28 Nov 2019 07:39:03 +0000 Thu, 28 Nov 2019 07:00:00 +0000 KubeletReady kubelet is posting ready status
Answer 0 (Score: 2)
There are several ways to get a node into the "NotReady" state, but not through Pods. When a Pod starts consuming too much memory, the kubelet kills that Pod, precisely to protect the node.
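For instance, a quick way to see this protection in action is a Pod whose workload allocates more memory than its limit: the container gets OOM-killed while the node itself stays Ready. A minimal sketch, assuming the commonly used polinux/stress image; the name and sizes are just illustrative choices:

apiVersion: v1
kind: Pod
metadata:
  name: memhog                     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: memhog
    image: polinux/stress          # small image that ships the 'stress' tool
    resources:
      limits:
        memory: "100Mi"            # hard cap; exceeding it gets the container killed
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "250M", "--vm-hang", "1"]   # tries to allocate well past the limit

After kubectl apply -f memhog.yaml, kubectl get pod memhog soon reports the status OOMKilled, while the node conditions above remain unchanged.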
I am guessing you want to test what happens when a node fails; in that case, what you want is to drain it. In other words, to simulate node issues, you should run:

kubectl drain NODE

Still, check kubectl drain --help to see what happens under which circumstances.
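As a side note, on a real cluster the bare command usually refuses to proceed because of DaemonSet-managed pods and pods using local storage. A sketch of a more typical invocation, plus how to bring the node back afterwards (the node name is a placeholder, and the local-data flag name has varied across kubectl versions):

kubectl drain <node-name> --ignore-daemonsets --delete-local-data
kubectl uncordon <node-name>       # make the node schedulable again after the test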
EDIT
Actually, I tried accessing the node and stressing it directly, and within about 20 seconds the node went "NotReady". On the node I ran:

root@gke-klusta-lemmy-3ce02acd-djhm:/# stress --cpu 16 --io 8 --vm 8 --vm-bytes 2G
I am running pretty weak nodes: 1 CPU @ 4 GB RAM.
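If the goal is to watch the pressure flags themselves flip to True during such a test, you can poll the node conditions while the stress runs; and to provoke DiskPressure specifically, filling the node's filesystem past the kubelet eviction threshold should work. A hedged sketch: the node name, path, and file size are placeholders, and the defaults assumed are the kubelet's standard hard-eviction thresholds (nodefs.available<10%, memory.available<100Mi):

# From the control plane, print every condition flag on the node:
kubectl get node <node-name> -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'

# On the node itself, eat disk space to push nodefs below the eviction threshold:
fallocate -l 50G /var/bigfile      # size depends on the node's free space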
Answer 1 (Score: 0)
Simply shut down the kubelet service on the node that you want to go "NotReady".
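On a kubeadm-style, systemd-managed node that is just the following (adjust if your distro runs the kubelet differently):

# On the target node:
sudo systemctl stop kubelet

# From the control plane; after the node-monitor-grace-period (40s by default)
# the node is marked NotReady:
kubectl get nodes

Running sudo systemctl start kubelet on the node brings it back to Ready.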