kubeadm init: changing the imageRepository

Date: 2018-10-15 12:53:47

Tags: kubernetes kubeadm

I am trying to bring up a Kubernetes cluster, but using a different registry URL from which Kubernetes should pull its images. AFAIK, this can only be done through a configuration file.

I am not familiar with the configuration file format, so I started with a simple one:

apiVersion: kubeadm.k8s.io/v1alpha2
imageRepository: my.internal.repo:8082
kind: MasterConfiguration
kubernetesVersion: v1.11.3
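
For reference, kubeadm can also print the image names it resolves from a config file (kubeadm config images list, available since v1.11); if the file is picked up correctly, the output should show the internal repository prefix:

kubeadm config images list --config file.yaml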

I then ran kubeadm init --config file.yaml. After a while it failed with the following error:

[init] using Kubernetes version: v1.11.3
[preflight] running pre-flight checks
I1015 12:05:54.066140   27275 kernel_validator.go:81] Validating kernel version
I1015 12:05:54.066324   27275 kernel_validator.go:96] Validating kernel config
        [WARNING Hostname]: hostname "kube-master-0" could not be reached
        [WARNING Hostname]: hostname "kube-master-0" lookup kube-master-0 on 10.11.12.246:53: no such host
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kube-master-0 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.5.189]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kube-master-0 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kube-master-0 localhost] and IPs [10.10.5.189 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled

                Unfortunately, an error has occurred:
                        timed out waiting for the condition

                This error is likely caused by:
                        - The kubelet is not running
                        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
                        - No internet connection is available so the kubelet cannot pull or find the following control plane images:
                                - my.internal.repo:8082/kube-apiserver-amd64:v1.11.3
                                - my.internal.repo:8082/kube-controller-manager-amd64:v1.11.3
                                - my.internal.repo:8082/kube-scheduler-amd64:v1.11.3
                                - my.internal.repo:8082/etcd-amd64:3.2.18
                                - You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
                                  are downloaded locally and cached.

                If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                        - 'systemctl status kubelet'
                        - 'journalctl -xeu kubelet'

                Additionally, a control plane component may have crashed or exited when started by the container runtime.
                To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
                Here is one example how you may list all Kubernetes containers running in docker:
                        - 'docker ps -a | grep kube | grep -v pause'
                        Once you have found the failing container, you can inspect its logs with:
                        - 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster

I checked the kubelet's status with systemctl status kubelet, and it is running.

I successfully pulled the image manually with:

docker pull my.internal.repo:8082/kube-apiserver-amd64:v1.11.3

However, docker ps -a returns no containers.

journalctl -xeu kubelet shows a lot of connection-refused errors on requests to k8s.io, and I am struggling to work out the root error.

Any ideas?

Thanks!

Edit 1: I tried opening the ports manually, but nothing changed (the commands I used are sketched below).

[centos@kube-master-0 ~]$ sudo firewall-cmd --zone=public --list-ports
6443/tcp 5000/tcp 2379-2380/tcp 10250-10252/tcp
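
For completeness, the ports were opened with commands along these lines (a sketch; the zone and port list are taken from the output above):

sudo firewall-cmd --zone=public --permanent --add-port=6443/tcp
sudo firewall-cmd --zone=public --permanent --add-port=5000/tcp
sudo firewall-cmd --zone=public --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --zone=public --permanent --add-port=10250-10252/tcp
sudo firewall-cmd --reload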

I also changed the Kubernetes version from 1.11.3 to 1.12.1, but nothing changed.

Edit 2: I realized the kubelet is still trying to pull from the k8s.io repository, which means I only changed the internal repository for kubeadm. I need to do the same for the kubelet.

Oct 22 11:10:06 kube-master-1-120 kubelet[24795]: E1022 11:10:06.108764   24795 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to...on refused
Oct 22 11:10:06 kube-master-1-120 kubelet[24795]: E1022 11:10:06.110539   24795 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v...on refused
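
One thing I can still check (an assumption on my part) is the flags file that kubeadm wrote during init, to see which pause image the kubelet was actually told to use:

cat /var/lib/kubelet/kubeadm-flags.env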

Any ideas?

2 Answers:

Answer 0 (score: 1):

Since comments don't allow formatting text properly, I'll post my comment as an answer:

What happens if you try to download the images before initializing the cluster? Example:

master-config.yaml:

apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.3

Command:

  

root@kube-master-01:~# kubeadm config images pull --config="/root/master-config.yaml"

Output:

[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.11.3
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.11.3
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.11.3
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.11.3
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.2.18
[config/images] Pulled k8s.gcr.io/coredns:1.2.2

PS: add imageRepository: my.internal.repo:8082 to the config before trying, as sketched below.
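
A minimal sketch of what that config could look like, reusing the my.internal.repo:8082 registry from the question (adjust the address to your own registry):

apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.3
imageRepository: my.internal.repo:8082

The same pull command should then fetch everything from the internal registry instead of k8s.gcr.io:

root@kube-master-01:~# kubeadm config images pull --config="/root/master-config.yaml"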

Answer 1 (score: 1):

You have solved half of the problem. The final step is probably to edit the kubelet init file (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf). You need to set the --pod-infra-container-image flag so that it references the pause container image pulled from your internal repository. The image name will look like this: my.internal.repo:8082/pause:[version]

The reason is that the kubelet does not pick up the new image repository on its own, so it keeps referencing the default one.
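
A minimal sketch of that change, assuming a systemd drop-in and the pause 3.1 tag shown in the v1.11 image list above (the exact file layout and version may differ on your setup):

# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (excerpt)
Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=my.internal.repo:8082/pause:3.1"

After editing, reload systemd and restart the kubelet so the flag takes effect:

systemctl daemon-reload
systemctl restart kubelet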