kube-dns stuck in Pending state

Date: 2018-03-29 11:41:17

Tags: kubernetes kube-dns

I have deployed Kubernetes on virt-manager VMs following this guide:

https://kubernetes.io/docs/setup/independent/install-kubeadm/

When I joined another VM to the cluster, I found that kube-dns was stuck in the Pending state.

root@ubuntu1:~# kubectl get pods --all-namespaces 
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
kube-system   etcd-ubuntu1                      1/1       Running   0          7m
kube-system   kube-apiserver-ubuntu1            1/1       Running   0          8m
kube-system   kube-controller-manager-ubuntu1   1/1       Running   0          8m
kube-system   kube-dns-86f4d74b45-br6ck         0/3       Pending   0          8m
kube-system   kube-proxy-sh9lg                  1/1       Running   0          8m
kube-system   kube-proxy-zwdt5                  1/1       Running   0          7m
kube-system   kube-scheduler-ubuntu1            1/1       Running   0          8m


root@ubuntu1:~# kubectl --namespace=kube-system describe pod kube-dns-86f4d74b45-br6ck
Name:           kube-dns-86f4d74b45-br6ck
Namespace:      kube-system
Node:           <none>
Labels:         k8s-app=kube-dns
                pod-template-hash=4290830601
Annotations:    <none>
Status:         Pending
IP:             
Controlled By:  ReplicaSet/kube-dns-86f4d74b45
Containers:
  kubedns:
    Image:       k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
    Ports:       10053/UDP, 10053/TCP, 10055/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      --domain=cluster.local.
      --dns-port=10053
      --config-dir=/kube-dns-config
      --v=2
    Limits:
      memory:  170Mi
    Requests:
      cpu:      100m
      memory:   70Mi
    Liveness:   http-get http://:10054/healthcheck/kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:  http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
    Environment:
      PROMETHEUS_PORT:  10055
    Mounts:
      /kube-dns-config from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-4fjt4 (ro)
  dnsmasq:
    Image:       k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
    Ports:       53/UDP, 53/TCP
    Host Ports:  0/UDP, 0/TCP
    Args:
      -v=2
      -logtostderr
      -configDir=/etc/k8s/dns/dnsmasq-nanny
      -restartDnsmasq=true
      --
      -k
      --cache-size=1000
      --no-negcache
      --log-facility=-
      --server=/cluster.local/127.0.0.1#10053
      --server=/in-addr.arpa/127.0.0.1#10053
      --server=/ip6.arpa/127.0.0.1#10053
    Requests:
      cpu:        150m
      memory:     20Mi
    Liveness:     http-get http://:10054/healthcheck/dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/k8s/dns/dnsmasq-nanny from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-4fjt4 (ro)
  sidecar:
    Image:      k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
    Port:       10054/TCP
    Host Port:  0/TCP
    Args:
      --v=2
      --logtostderr
      --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
      --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
    Requests:
      cpu:        10m
      memory:     20Mi
    Liveness:     http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-4fjt4 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-dns-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-dns
    Optional:  true
  kube-dns-token-4fjt4:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-dns-token-4fjt4
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age               From               Message
  ----     ------            ----              ----               -------
  Warning  FailedScheduling  6m (x7 over 7m)   default-scheduler  0/1 nodes are available: 1 node(s) were not ready.
  Warning  FailedScheduling  3s (x19 over 6m)  default-scheduler  0/2 nodes are available: 2 node(s) were not ready.

Can anyone help me resolve this and find the actual problem?

Any help would be much appreciated.

Thanks in advance.

3 Answers:

Answer 0 (score: 2)

In addition to what @justcompile wrote, you need at least 2 CPU cores to run all the pods in the kube-system namespace without problems.

You should verify how many resources that box has and compare them against the CPU reservations each pod makes.

For example, in the output you provided, I can see that your DNS service tries to reserve 10% of a CPU core:

Requests:
  cpu:      100m

You can inspect each deployed pod and its CPU reservation with:

kubectl describe pods --namespace=kube-system
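Taken together, the three kube-dns containers in the describe output above (kubedns, dnsmasq, sidecar) reserve more than a quarter of a core; a quick sanity check using the request values from that output:

```shell
# Sum the CPU requests of the three kube-dns containers, in millicores:
# kubedns 100m + dnsmasq 150m + sidecar 10m.
total=$((100 + 150 + 10))
echo "${total}m"   # prints 260m, the CPU reserved by the kube-dns pod alone
```

This is why a single-core VM fills up quickly once the control-plane pods are also counted.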

Answer 1 (score: 0)

First, if you run kubectl get nodes, does it show both/all nodes in the Ready state?

If it does: I ran into this problem myself, and found by checking kubectl get events that the pods were failing because they required at least 2 CPUs to run.

Since I was originally running this on an old MacBook Pro via VirtualBox, I had to give up and use AWS (other cloud platforms are of course available) in order to get multiple CPUs per node.
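Before reprovisioning, it is worth checking how many logical CPUs each VM actually has, since kubeadm's preflight checks enforce a 2-CPU minimum on the control-plane machine:

```shell
# Number of logical CPUs visible to this VM; kubeadm requires
# at least 2 on the control-plane node.
nproc
```

If this prints 1, resize the VM (or move to a bigger instance) before retrying kubeadm init.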

Answer 2 (score: 0)

In the Events output you posted, there is nothing about a pod network.

kube-dns will not fully deploy until a pod network is installed, so you have to choose a network implementation and install that pod network add-on first. For details, see kube-dns is stuck in the Pending state and install pod network solution.
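As one example (assuming Flannel is an acceptable choice; the manifest URL below is the path published in the Flannel documentation around this Kubernetes version, so verify it against the current docs before applying):

```shell
# Install the Flannel pod network add-on. Note: for Flannel, the cluster
# must have been initialized with a matching pod CIDR, e.g.
#   kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Watch until the flannel pods are Running; kube-dns should then
# leave the Pending state and get scheduled.
kubectl get pods --all-namespaces --watch
```

Weave Net, Calico, and the other add-ons listed in the kubeadm documentation are equally valid choices; each has its own manifest and CIDR requirements.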