Kubernetes via kubeadm on OpenStack using KVM instances

Date: 2018-11-21 19:37:17

Tags: kubernetes kubeadm openstack-neutron openstack-horizon

I have successfully deployed a "working" Kubernetes cluster, using the Horizon interface to create the Linux instances:

[screenshot of the Horizon instance list]

The hosts were configured according to https://kubernetes.io/docs/setup/independent/high-availability/
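
For reference, the sequence that guide boils down to looks roughly like the sketch below; LOAD_BALANCER_DNS/PORT, the token, and the hash are placeholders, not my actual values:

# Sketch only -- kubeadm-config.yaml sets controlPlaneEndpoint, the external etcd
# endpoints/certificates, and networking.podSubnet (10.244.0.0/16 to match flannel).
$ sudo kubeadm init --config kubeadm-config.yaml
# pod network add-on, applied once the first master is up:
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# workers join with the command printed by kubeadm init:
$ sudo kubeadm join LOAD_BALANCER_DNS:LOAD_BALANCER_PORT --token <token> --discovery-token-ca-cert-hash sha256:<hash>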

At this point I can say that I have a Kubernetes cluster:

$ kubectl get nodes
NAME               STATUS    ROLES     AGE       VERSION
kube-apiserver-1   Ready     master    1d        v1.12.2
kube-apiserver-2   Ready     master    1d        v1.12.2
kube-apiserver-3   Ready     master    1d        v1.12.2
kube-node-1        Ready     <none>    21h       v1.12.2
kube-node-2        Ready     <none>    21h       v1.12.2
kube-node-3        Ready     <none>    21h       v1.12.2
kube-node-4        Ready     <none>    21h       v1.12.2

However, getting beyond this point has proven quite difficult. I cannot create usable services, and coredns, which is an essential component, keeps failing:

$ kubectl -n kube-system get pods
NAME                                       READY     STATUS             RESTARTS   AGE
coredns-576cbf47c7-4gdnc                   0/1       CrashLoopBackOff   288        23h
coredns-576cbf47c7-x4h4v                   0/1       CrashLoopBackOff   288        23h
kube-apiserver-kube-apiserver-1            1/1       Running            0          1d
kube-apiserver-kube-apiserver-2            1/1       Running            0          1d
kube-apiserver-kube-apiserver-3            1/1       Running            0          1d
kube-controller-manager-kube-apiserver-1   1/1       Running            3          1d
kube-controller-manager-kube-apiserver-2   1/1       Running            1          1d
kube-controller-manager-kube-apiserver-3   1/1       Running            0          1d
kube-flannel-ds-amd64-2zdtd                1/1       Running            0          20h
kube-flannel-ds-amd64-7l5mr                1/1       Running            0          20h
kube-flannel-ds-amd64-bmvs9                1/1       Running            0          1d
kube-flannel-ds-amd64-cmhkg                1/1       Running            0          1d
...

The errors in the pod's logs show that it cannot reach the kubernetes service:

$ kubectl -n kube-system logs coredns-576cbf47c7-4gdnc
E1121 18:04:48.928055       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:355: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1121 18:04:48.928688       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:348: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1121 18:04:48.928917       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:350: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1121 18:05:19.929869       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:355: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1121 18:05:19.930819       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:348: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1121 18:05:19.931517       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:350: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1121 18:05:50.932159       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:355: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1121 18:05:50.932722       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:348: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1121 18:05:50.933179       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:350: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
2018/11/21 18:06:07 [INFO] SIGTERM: Shutting down servers then terminating
E1121 18:06:21.933058       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:355: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1121 18:06:21.934010       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:348: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1121 18:06:21.935107       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:350: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
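
To separate a kube-proxy problem from an overlay/connectivity problem, checks along these lines seem relevant (a sketch, run on the worker hosting the crashing pod; even a 401/403 from the apiserver would prove connectivity, whereas here the requests time out):

$ sudo iptables-save | grep 10.96.0.1            # did kube-proxy program rules for the ClusterIP?
$ curl -k https://10.96.0.1:443/version          # ClusterIP reachable from the node's host network?
$ curl -k https://192.168.5.19:6443/version      # one apiserver endpoint reached directly
$ kubectl -n kube-system get pods -o wide | grep kube-proxy
$ kubectl -n kube-system logs kube-proxy-xxxxx   # placeholder name: the kube-proxy instance on the affected node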

$ kubectl -n kube-system describe pod coredns-576cbf47c7-dk7sh

...
Events:
  Type     Reason     Age                From                  Message
  ----     ------     ----               ----                  -------
  Normal   Scheduled  25m                default-scheduler     Successfully assigned kube-system/coredns-576cbf47c7-dk7sh to kube-node-3
  Normal   Pulling    25m                kubelet, kube-node-3  pulling image "k8s.gcr.io/coredns:1.2.2"
  Normal   Pulled     25m                kubelet, kube-node-3  Successfully pulled image "k8s.gcr.io/coredns:1.2.2"
  Normal   Created    20m (x3 over 25m)  kubelet, kube-node-3  Created container
  Normal   Killing    20m (x2 over 22m)  kubelet, kube-node-3  Killing container with id docker://coredns:Container failed liveness probe.. Container will be killed and recreated.
  Normal   Pulled     20m (x2 over 22m)  kubelet, kube-node-3  Container image "k8s.gcr.io/coredns:1.2.2" already present on machine
  Normal   Started    20m (x3 over 25m)  kubelet, kube-node-3  Started container
  Warning  Unhealthy  4m (x36 over 24m)  kubelet, kube-node-3  Liveness probe failed: HTTP probe failed with statuscode: 503
  Warning  BackOff    17s (x22 over 8m)  kubelet, kube-node-3  Back-off restarting failed container
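
The 503 from the liveness probe is consistent with the kubernetes plugin never managing to list anything from the API, so before blaming coredns itself it seems worth confirming that the pod CIDRs kubeadm handed out match the network flannel was deployed with. A sketch of such checks (the flannel pod name is taken from the listing above):

$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
$ kubectl -n kube-system get configmap kube-flannel-cfg -o jsonpath='{.data.net-conf\.json}'
$ kubectl -n kube-system logs kube-flannel-ds-amd64-2zdtd | head -n 30   # which interface/subnet did flannel pick?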

The kubernetes service is there and appears to be configured correctly:

$ kubectl get svc

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   23h

$ kubectl describe svc kubernetes

Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.96.0.1
Port:              https  443/TCP
TargetPort:        6443/TCP
Endpoints:         192.168.5.19:6443,192.168.5.24:6443,192.168.5.29:6443
Session Affinity:  None
Events:            <none>

$ kubectl get endpoints

NAME         ENDPOINTS                                               AGE
kubernetes   192.168.5.19:6443,192.168.5.24:6443,192.168.5.29:6443   23h

I somewhat suspect that I am missing something in the networking layer and that this issue is related to Neutron. There are plenty of guides on installing Kubernetes with other tools, and on installing Kubernetes on OpenStack in other ways, but I have yet to find one that explains how to install Kubernetes on KVM instances created through the Horizon interface and how to handle the security groups and networking along the way. By the way, all IPv4/TCP ports are open between the masters and the nodes.
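
For reference, the kind of check that applies on the OpenStack side would look like the sketch below; the security group name "kubernetes" and the CIDR are assumptions, not values from this deployment. One thing an "all TCP" rule set does not cover is flannel's default vxlan backend, which uses UDP port 8472:

$ openstack security group rule list kubernetes
$ openstack security group rule create --protocol udp --dst-port 8472 --remote-ip 192.168.5.0/24 kubernetes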

Does anyone have guidance for this kind of scenario?

1 Answer:

Answer 0 (score 0):

The problem here was a polluted etcd cluster. Once I rebuilt the EXTERNAL etcd cluster and started over from scratch following https://kubernetes.io/docs/setup/independent/high-availability/#external-etcd, everything worked as expected. There does not seem to be a tool available to reset the etcd entries for the flannel pod network.
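
For anyone who wants to at least inspect what is stored before tearing the cluster down, a sketch with etcdctl v3 follows; the endpoint address and certificate paths are assumptions based on the external-etcd guide, not values from this cluster, and need adjusting to the actual deployment:

$ ETCDCTL_API=3 etcdctl --endpoints=https://192.168.5.19:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/apiserver-etcd-client.crt \
    --key=/etc/kubernetes/pki/apiserver-etcd-client.key \
    get /registry/ --prefix --keys-only | grep -i flannel   # list only the keys mentioning flannel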