kubeadm init fails with: x509: certificate signed by unknown authority

Date: 2019-04-22 10:39:58

Tags: kubernetes ansible vagrant

Following https://kubernetes.io/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/, I am trying to set up Kubernetes on a Mac using Vagrant, with this Ansible playbook step:

  - name: Initialize the Kubernetes cluster using kubeadm
    command: kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10" --node-name k8s-master --pod-network-cidr=192.168.0.0/16

I get the error message:

  fatal: [k8s-master]: FAILED! => {"changed": true, "cmd": ["kubeadm", "init", "--apiserver-advertise-address=192.168.50.10", "--apiserver-cert-extra-sans=192.168.50.10", "--node-name", "k8s-master", "--pod-network-cidr=192.168.0.0/16"], "delta": "0:00:03.446240", "end": "2019-04-22 08:32:03.655520", "msg": "non-zero return code", "rc": 1, "start": "2019-04-22 08:32:00.209280", "stderr": "I0422 08:32:00.877733    5038 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL \"https://dl.k8s.io/release/stable-1.txt\": Get https://dl.k8s.io/release/stable-1.txt: x509: certificate signed by unknown authority\nI0422 08:32:00.877767    5038 version.go:97] falling back to the local client version: v1.14.1\n\t[WARNING IsDockerSystemdCheck]: detected \"cgroupfs\" as the Docker cgroup driver. The recommended driver is \"systemd\". Please follow the guide at https://kubernetes.io/docs/setup/cri/\nerror execution phase preflight: [preflight] Some fatal errors occurred:\n\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.14.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority\n,

So I tried running the kubeadm init command manually:

kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10"  --node-name k8s-master --pod-network-cidr=192.168.0.0/16  --ignore-preflight-errors all
I0422 08:51:06.815553    6537 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: x509: certificate signed by unknown authority
I0422 08:51:06.815587    6537 version.go:97] falling back to the local client version: v1.14.1

I tried the same command with --ignore-preflight-errors all:

kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10"  --node-name k8s-master --pod-network-cidr=192.168.0.0/16  --ignore-preflight-errors all
I0422 08:51:35.741958    6809 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: x509: certificate signed by unknown authority
I0422 08:51:35.742030    6809 version.go:97] falling back to the local client version: v1.14.1
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.14.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority
, error: exit status 1
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.14.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority
, error: exit status 1
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.14.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority
, error: exit status 1
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.14.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority
, error: exit status 1
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority
, error: exit status 1
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/etcd:3.3.10: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority
, error: exit status 1
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/coredns:1.3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority
, error: exit status 1
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
My master-playbook.yml starts with:

---
- hosts: all
  become: true
  tasks:
  - name: Install packages that allow apt to be used over HTTPS
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg-agent
      - software-properties-common

  - name: Add an apt signing key for Docker
    apt_key:
      url: https://download.docker.com/linux/ubuntu/gpg
      state: present

  - name: Add apt repository for stable version
    apt_repository:
      repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable
      state: present

  - name: Install docker and its dependencies
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - docker-ce
      - docker-ce-cli
      - containerd.io
    notify:
      - docker status

  - name: Add vagrant user to docker group
    user:
      name: vagrant
      group: docker
The kubeadm init output then continues:
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.50.10 192.168.50.10]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

Based on the valuable suggestions, I tried the following command:

kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10" --pod-network-cidr=192.168.0.0/16  --kubernetes-version="v1.14.1" --ignore-preflight-errors all --cert-dir=/etc/ssl/cert

but received the error response:


[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
    [WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
    [WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
    [WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
    [WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Port-10250]: Port 10250 is in use
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.14.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority, error: exit status 1
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.14.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority, error: exit status 1
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.14.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority, error: exit status 1
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.14.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority, error: exit status 1
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority, error: exit status 1
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/etcd:3.3.10: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority, error: exit status 1
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/coredns:1.3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority, error: exit status 1
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/ssl/cert"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.50.10 192.168.50.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
error execution phase kubeconfig/admin: a kubeconfig file "/etc/kubernetes/admin.conf" exists already but has got the wrong CA cert

Then the command:

kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10" --pod-network-cidr=192.168.0.0/16  --kubernetes-version="v1.14.1" --ignore-preflight-errors all --cert-dir=/etc/kubernetes/pki

gives this error trace:

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

Additionally:

root@k8s-master:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2019-04-24 00:13:07 UTC; 9min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 9746 (kubelet)
    Tasks: 16
   Memory: 27.7M
      CPU: 9.026s
   CGroup: /system.slice/kubelet.service
           └─9746 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1

Apr 24 00:22:19 k8s-master kubelet[9746]: E0424 00:22:19.652197    9746 kubelet.go:2244] node "k8s-master" not found
Apr 24 00:22:19 k8s-master kubelet[9746]: E0424 00:22:19.711938    9746 controller.go:115] failed to ensure node lease exists, will retry in 7s, error: Get https://192.168.50.10:6443/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/k8s-master?timeout=10s: dial tcp 192.168.50.10:6443: connect: connection refused
Apr 24 00:22:19 k8s-master kubelet[9746]: E0424 00:22:19.752613    9746 kubelet.go:2244] node "k8s-master" not found
Apr 24 00:22:19 k8s-master kubelet[9746]: E0424 00:22:19.818002    9746 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Get https://192.168.50.10:6443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 192.168.50.10:6443: connect: connection refused
Apr 24 00:22:19 k8s-master kubelet[9746]: E0424 00:22:19.859028    9746 kubelet.go:2244] node "k8s-master" not found
Apr 24 00:22:19 k8s-master kubelet[9746]: E0424 00:22:19.960182    9746 kubelet.go:2244] node "k8s-master" not found
Apr 24 00:22:20 k8s-master kubelet[9746]: E0424 00:22:20.018188    9746 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://192.168.50.10:6443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.50.10:6443: connect: connection refused
Apr 24 00:22:20 k8s-master kubelet[9746]: E0424 00:22:20.061118    9746 kubelet.go:2244] node "k8s-master" not found
Apr 24 00:22:20 k8s-master kubelet[9746]: E0424 00:22:20.169412    9746 kubelet.go:2244] node "k8s-master" not found
Apr 24 00:22:20 k8s-master kubelet[9746]: E0424 00:22:20.250762    9746 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Get https://192.168.50.10:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.50.10:6443: connect: connection refused
root@k8s-master:~#

Listing all docker containers:

root@k8s-master:~# docker ps -a
CONTAINER ID        IMAGE                                COMMAND                  CREATED             STATUS                       PORTS               NAMES
a22812e3c702        20a2d7035165                         "/usr/local/bin/kube…"   4 minutes ago       Up 4 minutes                                     k8s_kube-proxy_kube-proxy-t7nq9_kube-system_20f8d57d-6628-11e9-b099-080027ee87c4_0
b2a89f8418bb        k8s.gcr.io/pause:3.1                 "/pause"                 4 minutes ago       Up 4 minutes                                     k8s_POD_kube-proxy-t7nq9_kube-system_20f8d57d-6628-11e9-b099-080027ee87c4_0
6c327b9d36f2        cfaa4ad74c37                         "kube-apiserver --ad…"   5 minutes ago       Up 5 minutes                                     k8s_kube-apiserver_kube-apiserver-k8s-master_kube-system_0260f2060ab76fc71c634c4499054fe6_1
a1f1b3396810        k8s.gcr.io/etcd                      "etcd --advertise-cl…"   5 minutes ago       Up 5 minutes                                     k8s_etcd_etcd-k8s-master_kube-system_64388d0f4801f9b4aa01c8b7505258c9_0
0a3619df6a61        k8s.gcr.io/kube-controller-manager   "kube-controller-man…"   5 minutes ago       Up 5 minutes                                     k8s_kube-controller-manager_kube-controller-manager-k8s-master_kube-system_07bbd1f39b3ac969cc18015bbdce8871_0
ffb435b6adfe        k8s.gcr.io/kube-apiserver            "kube-apiserver --ad…"   5 minutes ago       Exited (255) 5 minutes ago                       k8s_kube-apiserver_kube-apiserver-k8s-master_kube-system_0260f2060ab76fc71c634c4499054fe6_0
ffb463d4cbc6        k8s.gcr.io/pause:3.1                 "/pause"                 5 minutes ago       Up 5 minutes                                     k8s_POD_etcd-k8s-master_kube-system_64388d0f4801f9b4aa01c8b7505258c9_0
a9672f233952        k8s.gcr.io/kube-scheduler            "kube-scheduler --bi…"   5 minutes ago       Up 5 minutes                                     k8s_kube-scheduler_kube-scheduler-k8s-master_kube-system_f44110a0ca540009109bfc32a7eb0baa_0
2bc0ab68870b        k8s.gcr.io/pause:3.1                 "/pause"                 5 minutes ago       Up 5 minutes                                     k8s_POD_kube-controller-manager-k8s-master_kube-system_07bbd1f39b3ac969cc18015bbdce8871_0
667ae6988f2b        k8s.gcr.io/pause:3.1                 "/pause"                 5 minutes ago       Up 5 minutes                                     k8s_POD_kube-apiserver-k8s-master_kube-system_0260f2060ab76fc71c634c4499054fe6_0
b4e6c37f5300        k8s.gcr.io/pause:3.1                 "/pause"                 5 minutes ago       Up 5 minutes                                     k8s_POD_kube-scheduler-k8s-master_kube-system_f44110a0ca540009109bfc32a7eb0baa_0

3 answers:

Answer 0 (score: 2):

Remove the following parameter from the init command:

--node-name k8s-master

and include the following parameter to specify the Kubernetes version to deploy:

--kubernetes-version v1.14.1
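Combining the two changes, the resulting command would look something like this (a sketch; v1.14.1 is assumed here because it matches the locally installed kubeadm version shown in the question's logs):

```shell
kubeadm init \
  --apiserver-advertise-address="192.168.50.10" \
  --apiserver-cert-extra-sans="192.168.50.10" \
  --pod-network-cidr=192.168.0.0/16 \
  --kubernetes-version v1.14.1
```

Pinning --kubernetes-version also skips the lookup of https://dl.k8s.io/release/stable-1.txt, which was itself failing with the same x509 error.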

Answer 1 (score: 0):

Try deleting the $HOME/.kube directory, and after kubeadm init run the following commands again:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Answer 2 (score: 0):

While I greatly appreciate all the valuable help with "kubeadm init fails: x509: certificate signed by unknown authority", what actually resolved the x509 certificate issue was the following addition to the Ansible playbook "kubernetes-setup/master-playbook.yml":

  - name: copy pem file
    copy: src=BCPSG.pem dest=/etc/ssl/certs

  - name: Update cert index
    shell: /usr/sbin/update-ca-certificates 

where BCPSG.pem is the certificate I copied into the directory containing the Vagrantfile, i.e. the kubernetes-setup directory. See again https://kubernetes.io/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/
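One caveat worth adding (my assumption, not part of the original answer): dockerd reads the system CA bundle only at startup, so a Docker restart is usually needed before image pulls from k8s.gcr.io start trusting the newly installed certificate. A sketch of the same playbook fragment with that extra task:

```yaml
  - name: copy pem file
    copy:
      src: BCPSG.pem
      dest: /etc/ssl/certs/BCPSG.pem

  - name: Update cert index
    command: /usr/sbin/update-ca-certificates

  - name: Restart docker to pick up the new CA bundle
    service:
      name: docker
      state: restarted
```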