Deploying docker-registry to K8s fails with "CrashLoopBackOff"

Date: 2019-10-31 11:41:50

Tags: kubernetes kubectl

I'm stuck deploying docker-registry to K8s. Below I'll describe in detail what I did. I hope you can give me some ideas.

My K8s version:

ii  kubeadm                               1.14.1-00                              amd64        Kubernetes Cluster Bootstrapping Tool
ii  kubectl                               1.14.1-00                              amd64        Kubernetes Command Line Tool
ii  kubelet                               1.14.1-00                              amd64        Kubernetes Node Agent
ii  kubernetes-cni                        0.7.5-00                               amd64        Kubernetes CNI

What did I do?
Created a self-signed certificate:

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout cert.key -out cert.crt
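The call above prompts interactively for the subject fields. A non-interactive variant might look like the sketch below; `-subj` fills the prompts, and `-addext` (OpenSSL 1.1.1+ only — an assumption, since the question doesn't state the OpenSSL version) adds a SAN for the ingress hostname used later in the chart values:

```shell
# Hypothetical non-interactive form of the command above. The hostname is
# assumed from the ingress section of chart_values.yaml; -addext needs
# OpenSSL 1.1.1 or newer.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout cert.key -out cert.crt \
  -subj "/CN=registry.mgmt.home.local" \
  -addext "subjectAltName=DNS:registry.mgmt.home.local"

# Inspect what was generated
openssl x509 -in cert.crt -noout -subject
```

Having the hostname in the SAN (not just the CN) matters, since modern TLS clients ignore the CN.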

Imported the self-signed certificate into K8s as a TLS secret, then wrote the chart values:

$ kubectl create secret tls registry-cert-secret --key cert.key --cert cert.crt
$ vim chart_values.yaml

ingress:
  enabled: true
  hosts:
    - registry.mgmt.home.local
  annotations:
    kubernetes.io/ingress.class: traefik
  tls:
    - secretName: registry-cert-secret
      hosts:
        - registry.mgmt.home.local

secrets:
  htpasswd: "admin:$2y$05$f95dCd6fRxQdDoPJ6mJIb.YMvR0qfhddSl3NSL1wCk1ZMl4JyFBDW"
  s3:
    accessKey: "admin"
    secretKey: "admin2019"

storage: s3
s3:
  region: us-east-1
  regionEndpoint: http://minio.home.local:9000
  secure: true
  bucket: registry
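As a side note on the values file: the `htpasswd` entry must be a single `user:bcrypt-hash` line. A quick shape check of that entry (format only — verifying the password itself would need `htpasswd` or a bcrypt library) might look like:

```shell
# Shape check of the htpasswd entry used above. Splitting the bcrypt hash
# on '$' exposes the variant (2a/2b/2y) and the cost factor.
entry='admin:$2y$05$f95dCd6fRxQdDoPJ6mJIb.YMvR0qfhddSl3NSL1wCk1ZMl4JyFBDW'
user=${entry%%:*}
hash=${entry#*:}
variant=$(printf '%s' "$hash" | cut -d'$' -f2)   # bcrypt variant
cost=$(printf '%s' "$hash" | cut -d'$' -f3)      # bcrypt cost factor
echo "$user: bcrypt variant $variant, cost $cost"
```

The chart wires the `s3` keys into `REGISTRY_STORAGE_S3_*` environment variables, as the pod description further down shows.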

Then installed the chart with Helm:

$ helm install stable/docker-registry -f chart_values.yaml --name docker-registry

NAME:   docker-registry
LAST DEPLOYED: Thu Oct 31 16:29:31 2019
NAMESPACE: default
STATUS: DEPLOYED

Showed the deployments:

$ kubectl get deployments

NAME              READY   UP-TO-DATE   AVAILABLE   AGE
docker-registry   0/1     1            0           35m

Got the pods:

$ kubectl get pods --namespace default

NAME                               READY   STATUS             RESTARTS   AGE
docker-registry-6989668db6-78d84   0/1     CrashLoopBackOff   7          13m
docker-registry-6989668db6-jttrz   1/1     Terminating        0          37m

Described the failing pod:

$ kubectl describe pod docker-registry-6989668db6-78d84 --namespace default

Name:               docker-registry-6989668db6-78d84
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               k8s-worker-promox/10.102.11.223
Start Time:         Thu, 31 Oct 2019 18:03:13 +0800
Labels:             app=docker-registry
                    pod-template-hash=6989668db6
                    release=docker-registry
Annotations:        checksum/config: 89b20bb43a348d6b8dedacac583a596ccef4e570a935e7c5b464ba746eb88307
Status:             Running
IP:                 10.244.52.10
Controlled By:      ReplicaSet/docker-registry-6989668db6
Containers:
  docker-registry:
    Container ID:  docker://9a40c5e100711b122ddd78439c9fa21790f04f5a442b704140639f8fbfbd8929
    Image:         registry:2.7.1
    Image ID:      docker-pullable://registry@sha256:8004747f1e8cd820a148fb7499d71a76d45ff66bac6a29129bfdbfdc0154d146
    Port:          5000/TCP
    Host Port:     0/TCP
    Command:
      /bin/registry
      serve
      /etc/docker/registry/config.yml
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Thu, 31 Oct 2019 18:14:21 +0800
      Finished:     Thu, 31 Oct 2019 18:15:19 +0800
    Ready:          False
    Restart Count:  7
    Liveness:       http-get http://:5000/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:5000/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      REGISTRY_AUTH:                       htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM:        Registry Realm
      REGISTRY_AUTH_HTPASSWD_PATH:         /auth/htpasswd
      REGISTRY_HTTP_SECRET:                <set to the key 'haSharedSecret' in secret 'docker-registry-secret'>  Optional: false
      REGISTRY_STORAGE_S3_ACCESSKEY:       <set to the key 's3AccessKey' in secret 'docker-registry-secret'>     Optional: false
      REGISTRY_STORAGE_S3_SECRETKEY:       <set to the key 's3SecretKey' in secret 'docker-registry-secret'>     Optional: false
      REGISTRY_STORAGE_S3_REGION:          us-east-1
      REGISTRY_STORAGE_S3_REGIONENDPOINT:  http://10.102.11.218:9000
      REGISTRY_STORAGE_S3_BUCKET:          registry
      REGISTRY_STORAGE_S3_SECURE:          true
    Mounts:
      /auth from auth (ro)
      /etc/docker/registry from docker-registry-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qfwkm (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  auth:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  docker-registry-secret
    Optional:    false
  docker-registry-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      docker-registry-config
    Optional:  false
  default-token-qfwkm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-qfwkm
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                     From                        Message
  ----     ------     ----                    ----                        -------
  Normal   Scheduled  14m                     default-scheduler           Successfully assigned default/docker-registry-6989668db6-78d84 to k8s-worker-promox
  Normal   Pulled     12m (x3 over 14m)       kubelet, k8s-worker-promox  Container image "registry:2.7.1" already present on machine
  Normal   Created    12m (x3 over 14m)       kubelet, k8s-worker-promox  Created container docker-registry
  Normal   Started    12m (x3 over 14m)       kubelet, k8s-worker-promox  Started container docker-registry
  Normal   Killing    12m (x2 over 13m)       kubelet, k8s-worker-promox  Container docker-registry failed liveness probe, will be restarted
  Warning  Unhealthy  12m (x7 over 14m)       kubelet, k8s-worker-promox  Liveness probe failed: HTTP probe failed with statuscode: 503
  Warning  Unhealthy  9m8s (x15 over 13m)     kubelet, k8s-worker-promox  Readiness probe failed: HTTP probe failed with statuscode: 503
  Warning  BackOff    4m26s (x18 over 8m40s)  kubelet, k8s-worker-promox  Back-off restarting failed container

I can see the problem is with the liveness and readiness probes: they fail, the pod gets restarted over and over, and it eventually ends up in "Back-off".
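For reference, kubelet counts an HTTP probe as successful only when the status code is between 200 and 399, so the 503 responses in the events above are hard failures. A minimal sketch of that rule:

```shell
# kubelet's HTTP probe rule: 200-399 passes, everything else fails.
probe_ok() { [ "$1" -ge 200 ] && [ "$1" -lt 400 ]; }

probe_ok 200 && echo "200: probe passes"
probe_ok 503 || echo "503: probe fails -> restart -> CrashLoopBackOff"
```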

After some troubleshooting, I suspected this was DNS-related. However, DNS itself seems fine. I tried looking up the name from the K8s host:

$ nslookup minio.home.local

Server:     10.102.11.201
Address:    10.102.11.201#53

Non-authoritative answer:
Name:   minio.home.local
Address: 10.101.12.213

Update Nov 1: I exec'd into another pod and ran nslookup there; that pod cannot resolve minio.home.local. Is this related to the problem? I also tried replacing minio.home.local with its IP in the *.yaml, but I hit the same problem.

$ kubectl exec -it net-utils-5b5f89f777-2cwgq bash
root@net-utils-5b5f89f777-2cwgq:/#
root@net-utils-5b5f89f777-2cwgq:/#
root@net-utils-5b5f89f777-2cwgq:/#
root@net-utils-5b5f89f777-2cwgq:/# nslookup minio.home.local
Server:     10.96.0.10
Address:    10.96.0.10#53

** server can't find minio.skylab.local: NXDOMAIN

root@net-utils-5b5f89f777-2cwgq:/# ping minio.home.local
ping: unknown host
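The failed lookup and ping above are ordinary resolution failures (NXDOMAIN from the cluster resolver at 10.96.0.10). Comparing against a name that is guaranteed to resolve makes that explicit — a sketch; `getent` queries whatever resolver the current host uses, and minio.home.local will of course only resolve inside this lab network:

```shell
# getent uses the local resolver (like nslookup, minus the DNS-server
# detail). localhost must resolve everywhere; minio.home.local is an
# internal lab hostname and resolves only on that network.
getent hosts localhost >/dev/null && echo "localhost: resolves"
getent hosts minio.home.local >/dev/null || echo "minio.home.local: no record here"
```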

I've googled and read through GitHub discussions, but I still can't solve it. Do you have any ideas?

Thank you very much.

0 Answers:

No answers