Kubernetes pod keeps restarting

Date: 2020-01-18 08:50:49

Tags: python docker kubernetes google-cloud-platform google-kubernetes-engine

I am running a GKE cluster with two node pools.

First node pool: 1 node, no autoscaling (4 vCPU, 16 GB RAM)

Second node pool: 1 node, autoscaling up to 2 nodes (1 vCPU, 3.75 GB RAM)

Here is the output of kubectl top nodes:

(screenshot: kubectl top nodes output)

We started the cluster with a single node running Elasticsearch, Redis, RabbitMQ, and all the microservices. We cannot add more nodes to the first node pool because that would waste resources; the first node already satisfies all resource requirements.

We are facing an issue where the pods of only one microservice keep restarting.

(screenshot: pod restart counts)

Only the core service pods are restarting. When I describe the pod, the last state shows it was terminated with ERROR 137.
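Exit code 137 is the container runtime's way of reporting a SIGKILL (128 + signal number 9), which is exactly what the kernel OOM killer delivers when a memory cgroup limit is exceeded. A quick stdlib check of that arithmetic:

```python
import signal

# Container exit codes above 128 encode the fatal signal: code = 128 + signum.
exit_code = 137
signum = exit_code - 128
print(signum, signal.Signals(signum).name)  # 9 SIGKILL
```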

In the GKE Stackdriver graphs, memory and CPU have not reached their limits.

Cluster utilization for all pods:

(screenshot: cluster utilization graph)

In the cluster logs, I found this warning:

0/3 nodes are available: 3 Insufficient CPU. 

But there are 3 nodes here with about 6 vCPUs in total, which should be more than enough.
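One likely explanation for that warning: the scheduler compares the sum of pod CPU requests against each node's allocatable CPU, not actual usage, so nodes can look idle in the utilization graphs and still refuse new pods. A minimal sketch of that check (the numbers below are hypothetical, not taken from this cluster):

```python
def can_schedule(allocatable_mcpu, requested_mcpus, new_request_mcpu):
    """The scheduler compares declared CPU requests (in millicores),
    not live usage, against the node's allocatable capacity."""
    return sum(requested_mcpus) + new_request_mcpu <= allocatable_mcpu

# Hypothetical node: ~4 vCPU allocatable, existing pods already request 3900m.
print(can_schedule(4000, [2000, 1000, 900], 200))  # False: requests exceed allocatable
```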

There is also this error:

Memory cgroup out of memory: Kill process 3383411 (python3) score 2046 or sacrifice child Killed process 3384902 (python3) total-vm:14356kB, anon-rss:5688kB, file-rss:4572kB, shmem-rss:0kB
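That log line says the OOM killer fired inside the memory cgroup, and the sacrificed child's resident set was tiny (anon-rss:5688kB, about 5.5 MiB); it is the cgroup's total memory, not that single process, that hit the limit. A small stdlib parser for lines in this format (the regex is an assumption based on this one sample):

```python
import re

LOG = ("Memory cgroup out of memory: Kill process 3383411 (python3) score 2046 "
       "or sacrifice child Killed process 3384902 (python3) total-vm:14356kB, "
       "anon-rss:5688kB, file-rss:4572kB, shmem-rss:0kB")

def parse_oom(line):
    """Extract the killed PID and the memory counters (in kB) from a kernel OOM line."""
    pid = int(re.search(r"Killed process (\d+)", line).group(1))
    fields = {k: int(v) for k, v in re.findall(r"([a-z-]+):(\d+)kB", line)}
    return pid, fields

pid, mem = parse_oom(LOG)
print(pid, mem["anon-rss"])  # 3384902 5688
```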

EDIT 1:

Name:           test-core-7fc8bbcb4c-vrbtw
Namespace:      default
Priority:       0
Node:           gke-test-cluster-highmem-pool-gen2-f2743e02-msv2/10.128.0.7
Start Time:     Fri, 17 Jan 2020 19:59:54 +0530
Labels:         app=test-core
                pod-template-hash=7fc8bbcb4c
                tier=frontend
Annotations:    <none>
Status:         Running
IP:             10.40.0.41
IPs:            <none>
Controlled By:  ReplicaSet/test-core-7fc8bbcb4c
Containers:
  test-core:
    Container ID:   docker://0cc49c15ed852e99361590ee421a9193e10e7740b7373450174f549e9ba1d7b5
    Image:          gcr.io/test-production/core/production:fc30db4
    Image ID:       docker-pullable://gcr.io/test-production/core/production@sha256:b5dsd03b57sdfsa6035ff5ba9735984c3aa714bb4c9bb92f998ce0392ae31d055fe
    Ports:          9595/TCP, 443/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Sun, 19 Jan 2020 14:54:52 +0530
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Sun, 19 Jan 2020 07:36:42 +0530
      Finished:     Sun, 19 Jan 2020 14:54:51 +0530
    Ready:          True
    Restart Count:  7
    Limits:
      cpu:     990m
      memory:  1Gi
    Requests:
      cpu:      200m
      memory:   128Mi
    Liveness:   http-get http://:9595/k8/liveness delay=25s timeout=5s period=5s #success=1 #failure=30
    Readiness:  http-get http://:9595/k8/readiness delay=25s timeout=8s period=5s #success=1 #failure=30
    Environment Variables from:
      test-secret             Secret     Optional: false
      core-staging-configmap  ConfigMap  Optional: false
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-hcz6d:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hcz6d
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>
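From the describe output above: the requests (200m CPU / 128Mi) are lower than the limits (990m / 1Gi), which is why the QoS class is Burstable, and the memory cgroup limit the OOM killer enforces is the container's 1Gi limit, not the node's 16 GB. A simplified, single-container sketch of how that QoS class falls out (the real rules consider every container in the pod):

```python
def qos_class(requests, limits):
    """Rough single-container sketch of Kubernetes QoS classification."""
    if not requests and not limits:
        return "BestEffort"
    if requests == limits and all(k in requests for k in ("cpu", "memory")):
        return "Guaranteed"
    return "Burstable"  # requests set but lower than limits, as in this pod

print(qos_class({"cpu": "200m", "memory": "128Mi"},
                {"cpu": "990m", "memory": "1Gi"}))  # Burstable
```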

Please help. Thank you.

1 Answer:

Answer 0 (score: 3):

The application running in the pod is probably consuming more memory than the specified limit. You can docker exec / kubectl exec into the container and monitor the application with top. From the point of view of managing the whole cluster, we used cAdvisor (part of the kubelet) + Heapster for this, but Heapster has since been replaced by the Kubernetes metrics server (https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring).
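As a stdlib-only stand-in for exec-ing into the container and watching top, a Python process can read its own resident set straight from /proc (Linux-only, which is what the container would be; this is an illustration of what to watch, not the metrics-server API):

```python
def rss_kib(pid="self"):
    """Resident set size in KiB, read from /proc (Linux only; None elsewhere)."""
    try:
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])  # the kernel reports this value in kB
    except FileNotFoundError:
        return None  # not on Linux, or the process is gone
    return None

print(f"current RSS: {rss_kib()} KiB")
```

Logging this periodically from inside the core service would show whether its memory creeps toward the 1Gi limit before each kill.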