Kubelet failed to get cgroup stats for "/system.slice/docker.service"

Time: 2017-11-19 12:17:11

Tags: docker kubernetes cgroups kubelet

Question

What does this kubelet (1.8.3 on CentOS 7) error message actually mean, and how can it be resolved?

  

Nov 19 22:32:24 master kubelet[4425]: E1119 22:32:24.269786    4425 summary.go:92] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Nov 19 22:32:24 master kubelet[4425]: E1119 22:32:24.269802    4425 summary.go:92] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"

Research

I found the same error reported elsewhere and updated the kubelet service unit as below, but it did not work.

/etc/systemd/system/kubelet.service

[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/

[Service]
ExecStart=/usr/bin/kubelet --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
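
After editing the unit, the change only takes effect once systemd reloads it and kubelet restarts. A minimal sketch of that step (assuming a plain systemd setup; kubeadm installs usually keep extra flags in a drop-in such as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, so the flags may need to go there instead):

systemctl daemon-reload
systemctl restart kubelet

# Check that the cgroup flags are actually on the running process
ps -ef | grep '[k]ubelet' | tr ' ' '\n' | grep cgroups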

Background

The cluster was set up by following Install kubeadm. The Installing Docker section of that document describes aligning the cgroup drivers as follows.

  

Note: Make sure that the cgroup driver used by kubelet is the same as the one used by Docker. To ensure compatibility you can update Docker like so:

cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
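
A quick way to see which cgroup driver each side is actually using (an illustrative check, not part of the quoted docs):

# Driver the Docker daemon is running with
docker info 2>/dev/null | grep -i 'cgroup driver'

# Driver kubelet was started with (look for --cgroup-driver among its arguments)
ps -ef | grep '[k]ubelet' | tr ' ' '\n' | grep cgroup-driver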

But making this change caused the docker service to fail to start:

  

unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file: ...
Nov 19 16:55:56 localhost.localdomain systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
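
That particular failure usually means the same exec-opt is already being passed to dockerd on its command line (for example by the distribution's unit file), so daemon.json conflicts with it. One way to look for such a flag on CentOS (the paths below are typical locations, not taken from the original post):

grep -R -- 'cgroupdriver' /usr/lib/systemd/system/docker.service /etc/systemd/system/docker.service.d/ /etc/sysconfig/docker 2>/dev/null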

The master node is ready and all the system pods are running.

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE
kube-system   etcd-master                      1/1       Running   0          39m
kube-system   kube-apiserver-master            1/1       Running   0          39m
kube-system   kube-controller-manager-master   1/1       Running   0          39m
kube-system   kube-dns-545bc4bfd4-mqqqk        3/3       Running   0          40m
kube-system   kube-flannel-ds-fclcs            1/1       Running   2          13m
kube-system   kube-flannel-ds-hqlnb            1/1       Running   0          39m
kube-system   kube-proxy-t7z5w                 1/1       Running   0          40m
kube-system   kube-proxy-xdw42                 1/1       Running   0          13m
kube-system   kube-scheduler-master            1/1       Running   0          39m

Environment

Kubernetes 1.8.3 on CentOS with flannel.

$ kubectl version -o json | python -m json.tool
{
    "clientVersion": {
        "buildDate": "2017-11-08T18:39:33Z",
        "compiler": "gc",
        "gitCommit": "f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd",
        "gitTreeState": "clean",
        "gitVersion": "v1.8.3",
        "goVersion": "go1.8.3",
        "major": "1",
        "minor": "8",
        "platform": "linux/amd64"
    },
    "serverVersion": {
        "buildDate": "2017-11-08T18:27:48Z",
        "compiler": "gc",
        "gitCommit": "f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd",
        "gitTreeState": "clean",
        "gitVersion": "v1.8.3",
        "goVersion": "go1.8.3",
        "major": "1",
        "minor": "8",
        "platform": "linux/amd64"
    }
}


$ kubectl describe node master
Name:               master
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=master
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data={"VtepMAC":"86:b6:7a:d6:7b:b3"}
                    flannel.alpha.coreos.com/backend-type=vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager=true
                    flannel.alpha.coreos.com/public-ip=10.0.2.15
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:             node-role.kubernetes.io/master:NoSchedule
CreationTimestamp:  Sun, 19 Nov 2017 22:27:17 +1100
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Sun, 19 Nov 2017 23:04:56 +1100   Sun, 19 Nov 2017 22:27:13 +1100   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Sun, 19 Nov 2017 23:04:56 +1100   Sun, 19 Nov 2017 22:27:13 +1100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sun, 19 Nov 2017 23:04:56 +1100   Sun, 19 Nov 2017 22:27:13 +1100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  Ready            True    Sun, 19 Nov 2017 23:04:56 +1100   Sun, 19 Nov 2017 22:32:24 +1100   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.99.10
  Hostname:    master
Capacity:
 cpu:     1
 memory:  3881880Ki
 pods:    110
Allocatable:
 cpu:     1
 memory:  3779480Ki
 pods:    110
System Info:
 Machine ID:                 ca0a351004604dd49e43f8a6258ddd77
 System UUID:                CA0A3510-0460-4DD4-9E43-F8A6258DDD77
 Boot ID:                    e9060efa-42be-498d-8cb8-8b785b51b247
 Kernel Version:             3.10.0-693.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://1.12.6
 Kubelet Version:            v1.8.3
 Kube-Proxy Version:         v1.8.3
PodCIDR:                     10.244.0.0/24
ExternalID:                  master
Non-terminated Pods:         (7 in total)
  Namespace                  Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                              ------------  ----------  ---------------  -------------
  kube-system                etcd-master                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-apiserver-master             250m (25%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-controller-manager-master    200m (20%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-dns-545bc4bfd4-mqqqk         260m (26%)    0 (0%)      110Mi (2%)       170Mi (4%)
  kube-system                kube-flannel-ds-hqlnb             0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-proxy-t7z5w                  0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-scheduler-master             100m (10%)    0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  810m (81%)    0 (0%)      110Mi (2%)       170Mi (4%)
Events:
  Type    Reason                   Age                From                Message
  ----    ------                   ----               ----                -------
  Normal  Starting                 38m                kubelet, master     Starting kubelet.
  Normal  NodeAllocatableEnforced  38m                kubelet, master     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientDisk    37m (x8 over 38m)  kubelet, master     Node master status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  37m (x8 over 38m)  kubelet, master     Node master status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    37m (x7 over 38m)  kubelet, master     Node master status is now: NodeHasNoDiskPressure
  Normal  Starting                 37m                kube-proxy, master  Starting kube-proxy.
  Normal  Starting                 32m                kubelet, master     Starting kubelet.
  Normal  NodeAllocatableEnforced  32m                kubelet, master     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientDisk    32m                kubelet, master     Node master status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  32m                kubelet, master     Node master status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    32m                kubelet, master     Node master status is now: NodeHasNoDiskPressure
  Normal  NodeNotReady             32m                kubelet, master     Node master status is now: NodeNotReady
  Normal  NodeReady                32m                kubelet, master     Node master status is now: NodeReady

2 Answers:

Answer 0 (score: 0):

The cause of this problem is that the docker version on the node is different from the docker version that kubernetes requires.

You can simply uninstall docker and reinstall the required version on every node, then restart docker, and the node will come back online right away.

Because the underlying directories on disk still exist, the previously pulled docker images and the pods are not affected.

yum remove -y docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine

yum install -y docker-ce-18.09.7 docker-ce-cli-18.09.7 containerd.io
systemctl enable docker
systemctl start docker
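
A sanity check after the reinstall (not part of the original answer, just a suggested verification):

# Confirm the daemon is back at the expected version
docker version --format '{{.Server.Version}}'

# Restart kubelet so it reconnects to the new docker daemon, then watch the node
systemctl restart kubelet
kubectl get nodes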

Answer 1 (score: -1):

I ran into exactly the same problem. I had already added the arguments to ExecStart as described above but was still getting the same error. Then I did a kubeadm reset and systemctl daemon-reload and recreated the cluster. The error seems to have gone away. Testing now...
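
For reference, the sequence described above roughly corresponds to the following commands (a sketch only; the kubeadm init flags are placeholders and depend on how the cluster was originally created):

# Wipe the node's kubernetes state and pick up the edited unit file
kubeadm reset
systemctl daemon-reload
systemctl restart kubelet

# Re-create the cluster; 10.244.0.0/16 is the usual pod CIDR for flannel
kubeadm init --pod-network-cidr=10.244.0.0/16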