Kubelet process has high CPU usage for a long time

Time: 2017-05-19 07:21:08

Tags: kubernetes kubelet

I have a Kubernetes cluster with the Weave CNI plugin, consisting of 3 nodes:

  • 1 master node (a VM)
  • 2 bare-metal worker nodes (4-core Xeon with hyperthreading, 8 logical cores each)

The problem is that top shows kubelet at 60-100% CPU usage on the first worker. In journalctl -u kubelet I see a flood of messages (hundreds per minute):

May 19 09:57:38 kube-worker1 bash[3843]: E0519 09:57:38.075243    3843 docker_sandbox.go:205] Failed to stop sandbox "011cf10cf46dbc6bf2e11d1cb562af478eee21eba0c40521bf7af51ee5399640": Error response from daemon: {"message":"No such container: 011cf10cf46dbc6bf2e11d1cb562af478eee21eba0c40521bf7af51ee5399640"}
May 19 09:57:38 kube-worker1 bash[3843]: E0519 09:57:38.075360    3843 remote_runtime.go:109] StopPodSandbox "011cf10cf46dbc6bf2e11d1cb562af478eee21eba0c40521bf7af51ee5399640" from runtime service failed: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "cron-task-2533948c46c1-p6kwb_namespace" network: CNI failed to retrieve network namespace path: Error: No such container: 011cf10cf46dbc6bf2e11d1cb562af478eee21eba0c40521bf7af51ee5399640
May 19 09:57:38 kube-worker1 bash[3843]: E0519 09:57:38.075380    3843 kuberuntime_gc.go:138] Failed to stop sandbox "011cf10cf46dbc6bf2e11d1cb562af478eee21eba0c40521bf7af51ee5399640" before removing: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "cron-task-2533948c46c1-p6kwb_namespace" network: CNI failed to retrieve network namespace path: Error: No such container: 011cf10cf46dbc6bf2e11d1cb562af478eee21eba0c40521bf7af51ee5399640
May 19 09:57:38 kube-worker1 bash[3843]: E0519 09:57:38.076549    3843 docker_sandbox.go:205] Failed to stop sandbox "0125de37634ef7f3aa852c999cfb5849750167b1e3d63293a085ceca416e4ebf": Error response from daemon: {"message":"No such container: 0125de37634ef7f3aa852c999cfb5849750167b1e3d63293a085ceca416e4ebf"}
May 19 09:57:38 kube-worker1 bash[3843]: E0519 09:57:38.076654    3843 remote_runtime.go:109] StopPodSandbox "0125de37634ef7f3aa852c999cfb5849750167b1e3d63293a085ceca416e4ebf" from runtime service failed: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "cron-task-2533948c46c1-6g8jq_namespace" network: CNI failed to retrieve network namespace path: Error: No such container: 0125de37634ef7f3aa852c999cfb5849750167b1e3d63293a085ceca416e4ebf
May 19 09:57:38 kube-worker1 bash[3843]: E0519 09:57:38.076676    3843 kuberuntime_gc.go:138] Failed to stop sandbox "0125de37634ef7f3aa852c999cfb5849750167b1e3d63293a085ceca416e4ebf" before removing: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "cron-task-2533948c46c1-6g8jq_namespace" network: CNI failed to retrieve network namespace path: Error: No such container: 0125de37634ef7f3aa852c999cfb5849750167b1e3d63293a085ceca416e4ebf
May 19 09:57:38 kube-worker1 bash[3843]: E0519 09:57:38.079585    3843 docker_sandbox.go:205] Failed to stop sandbox "014135ede46ee45c176528da02782a38ded36bd10566f864c147ccb66a617772": Error response from daemon: {"message":"No such container: 014135ede46ee45c176528da02782a38ded36bd10566f864c147ccb66a617772"}
May 19 09:57:38 kube-worker1 bash[3843]: E0519 09:57:38.079805    3843 remote_runtime.go:109] StopPodSandbox "014135ede46ee45c176528da02782a38ded36bd10566f864c147ccb66a617772" from runtime service failed: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "cron-task-2533948c46c1-r30cw_namespace" network: CNI failed to retrieve network namespace path: Error: No such container: 014135ede46ee45c176528da02782a38ded36bd10566f864c147ccb66a617772
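
To get a sense of how fast these pile up, I counted them over a recent window; a minimal sketch, the grep pattern just matches the sandbox-teardown errors shown above:

    # count kubelet sandbox-teardown errors from the last 10 minutes
    journalctl -u kubelet --since "10 minutes ago" | grep -c "Failed to stop sandbox"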

This started after a buggy cronetes task was created. I deleted all of its pods with --force, but kubelet still keeps trying to remove them. I also restarted kubelet on that worker, with no result. How can I make kubelet forget about these pods?
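
For reference, the force deletion was done with the usual kubectl flags, roughly like this (using one of the pod names from the log above):

    kubectl delete pod cron-task-2533948c46c1-p6kwb -n namespace --grace-period=0 --force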

Version info

Kubernetes v1.6.1
Docker version 1.12.0, build 8eab29e
Linux kube-worker1 4.4.0-72-generic #93-Ubuntu SMP

Container manifest (without metadata)

  job:
    apiVersion: batch/v1
    kind: Job
    spec:
      template:
        spec:
          containers:
          - name: cron-task
            image: docker.company.ru/image:v2.3.2
            command: ["rake", "db:refresh_views"]
            env:
            - name: RAILS_ENV
              value: namespace
            - name: CONFIG_PATH
              value: /config
            volumeMounts:
            - name: config
              mountPath: /config
          volumes:
          - name: config
            configMap:
              name: task-conf
          restartPolicy: Never

Also, I could not find any trace of this pod name fragment (2533948c46c1) anywhere in the cluster.
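
The kind of check I mean, searching both the API server and the local Docker daemon for that name fragment:

    kubectl get pods --all-namespaces | grep 2533948c46c1
    docker ps -a | grep 2533948c46c1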

3 Answers:

Answer 0 (score: 1)

Finally I found the solution. Kubelet keeps a record of every pod sandbox it has run under

/var/lib/dockershim/sandbox

When I listed that directory, I found files for all of the missing pods. I deleted those files, the log messages disappeared, and CPU usage went back to normal (even without restarting kubelet).
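
Roughly the commands involved; a sketch using one sandbox ID from the logs above, adapt it to your own IDs and keep a backup if unsure:

    # list the sandbox checkpoint files dockershim still tracks
    ls /var/lib/dockershim/sandbox/

    # confirm the corresponding container really is gone from Docker
    docker inspect 011cf10cf46dbc6bf2e11d1cb562af478eee21eba0c40521bf7af51ee5399640

    # remove the stale checkpoint file so kubelet stops retrying the teardown
    rm /var/lib/dockershim/sandbox/011cf10cf46dbc6bf2e11d1cb562af478eee21eba0c40521bf7af51ee5399640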

Answer 1 (score: 0)

This looks related to the Kubernetes 1.6.x issue "Pods with hostNetwork=true cannot be removed (and generate errors) when using CNI". The messages themselves are harmless, but they are certainly annoying when you are trying to track down the actual problem. Upgrading to a more recent Kubernetes release should mitigate this.

Answer 2 (score: 0)

I ran into the same problem as you and analyzed it; the cause turned out to be kubelet's PLEG (Pod Lifecycle Event Generator) mechanism, and deleting the stale entries under /var/lib/dockershim/sandbox did the trick.
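
If you want to see PLEG activity on the node directly, the kubelet exposes relist metrics; a sketch, assuming the kubelet read-only port 10255 is enabled on the worker:

    # inspect PLEG-related metrics exposed by the kubelet
    curl -s http://localhost:10255/metrics | grep -i pleg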