I deployed 6 containers running on AWS EKS. However, after running for a while, the logs show an error: "2 node(s) were out of disk space". I tried deleting the pods and recreating them, but the error keeps recurring. Does anyone have a solution?
kubectl delete pod $image_name --namespace=xxx
kubectl describe pod $name --namespace=xxx
kubectl describe pod $image_name --namespace=xxx
Name:          image_name
Namespace:     xxx
Node:          <none>
Labels:        app=label
Annotations:   <none>
Status:        Pending
IP:
Controlled By: ReplicationController/label
Containers:
  label-container:
    Image:      image_name
    Port:       8084/TCP
    Host Port:  0/TCP
    Environment:
      SPRING_PROFILES_ACTIVE: uatsilver
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kv27l (ro)
Conditions:
  Type          Status
  PodScheduled  False
Volumes:
  default-token-kv27l:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-kv27l
    Optional:   false
QoS Class:      BestEffort
Node-Selectors: <none>
Tolerations:    node.kubernetes.io/not-ready:NoExecute for 300s
                node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  10s (x7 over 41s)  default-scheduler  0/3 nodes are available: 1 Insufficient pods, 2 node(s) were not ready, 2 node(s) were out of disk space.
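To confirm which nodes are the problem, you would normally run `kubectl get nodes` and `kubectl describe node <name>` and look at conditions such as DiskPressure. As a minimal, self-contained sketch (the node names and statuses below are invented for illustration, not taken from the question), here is how such output can be filtered down to the unhealthy nodes:

```shell
# Hypothetical `kubectl get nodes` output captured in a variable
# (in practice: nodes=$(kubectl get nodes)).
nodes='NAME           STATUS
ip-10-0-1-10   Ready
ip-10-0-1-11   NotReady
ip-10-0-1-12   NotReady'

# Print only the nodes whose STATUS is not Ready, skipping the header row.
not_ready=$(echo "$nodes" | awk 'NR>1 && $2!="Ready" {print $1}')
echo "$not_ready"
```

The two NotReady nodes here correspond to the "2 node(s) were not ready, 2 node(s) were out of disk space" counts in the scheduler event.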
Answer 0 (score: 0)
Kubernetes cannot schedule your Pod because the nodes have run out of disk space. As rafaf suggested in the comments, you should increase the disk space on the nodes: deleting the pods and recreating them will not fix the disk space limit on the nodes that host/run them.
If you created your worker nodes with the standard/default CloudFormation template from the documentation, you simply need to increase the NodeVolumeSize parameter: by default, each node gets a 20 GiB EBS volume. You can make it as large as you need.
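As a sketch of what that change could look like with the AWS CLI (the stack name and new size are placeholders, not from the question), you would update the parameter on the existing worker-node stack; note that a larger NodeVolumeSize generally only applies to nodes launched after the update, so existing instances may need to be cycled:

```shell
# Hypothetical stack name and size; adjust to your own setup.
# --use-previous-template keeps the template unchanged and only updates parameters.
# Any other template parameters would need ParameterKey=...,UsePreviousValue=true entries.
aws cloudformation update-stack \
  --stack-name eks-worker-nodes \
  --use-previous-template \
  --parameters ParameterKey=NodeVolumeSize,ParameterValue=100 \
  --capabilities CAPABILITY_IAM
```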
Also, double-check what is actually eating that much disk on the nodes! Normally logs rotate cleanly, and as long as you are not writing data directly on the node yourself (outside of Pods), you should not run into this kind of situation.