Kubernetes - Implementing a Kubernetes Master HA solution on CentOS 7

Time: 2017-06-30 03:29:49

Tags: kubernetes haproxy kubectl

I am implementing an HA solution for the Kubernetes master nodes in a CentOS 7 environment.

My environment is as follows:

K8S_Master1 : 172.16.16.5
K8S_Master2 : 172.16.16.51
HAProxy     : 172.16.16.100
K8S_Minion1 : 172.16.16.50


etcd Version: 3.1.7
Kubernetes v1.5.2
CentOS Linux release 7.3.1611 (Core)

My etcd cluster is set up correctly and in a working state:

[root@master1 ~]# etcdctl cluster-health
member 282a4a2998aa4eb0 is healthy: got healthy result from http://172.16.16.51:2379
member dd3979c28abe306f is healthy: got healthy result from http://172.16.16.5:2379
member df7b762ad1c40191 is healthy: got healthy result from http://172.16.16.50:2379

My Kubernetes configuration on Master1 is:

[root@master1 ~]# cat /etc/kubernetes/apiserver 
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:4001"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.100.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

[root@master1 ~]# cat /etc/kubernetes/config 
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://127.0.0.1:8080"

[root@master1 ~]# cat /etc/kubernetes/controller-manager 
KUBE_CONTROLLER_MANAGER_ARGS="--leader-elect"

[root@master1 ~]# cat /etc/kubernetes/scheduler 
KUBE_SCHEDULER_ARGS="--leader-elect"
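
To double-check that the flag is actually applied to the running processes, something like this should do:

[root@master1 ~]# ps -ef | grep -E 'kube-(scheduler|controller-manager)' | grep -- --leader-elect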

As for Master2, I have configured it as follows:

[root@master2 kubernetes]# cat apiserver 
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:4001"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.100.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

[root@master2 kubernetes]# cat config 
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://127.0.0.1:8080"

[root@master2 kubernetes]# cat scheduler 
KUBE_SCHEDULER_ARGS=""

[root@master2 kubernetes]# cat controller-manager 
KUBE_CONTROLLER_MANAGER_ARGS=""

Please note that --leader-elect is configured only on Master1, because I want Master1 to be the leader.

My HAProxy configuration is simple:

frontend K8S-Master
    bind 172.16.16.100:8080
    default_backend K8S-Master-Nodes

backend K8S-Master-Nodes
    mode        http
    balance     roundrobin
    server      master1 172.16.16.5:8080 check
    server      master2 172.16.16.51:8080 check
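
To confirm that HAProxy actually forwards to a healthy apiserver, a request to the /healthz endpoint through the load balancer IP should simply return ok:

[root@minion kubernetes]# curl http://172.16.16.100:8080/healthz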

Now I point my minions at the load balancer IP instead of directly at a master IP.

The minion's configuration is:

[root@minion kubernetes]# cat /etc/kubernetes/config 
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://172.16.16.100:8080"
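
The kubelet itself also has to reach the apiserver through the load balancer; with the CentOS packaging that would look roughly like this (a sketch only, flag name assumed for v1.5.x):

[root@minion kubernetes]# cat /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=172.16.16.50"
# assumed flag for v1.5.x: point the kubelet at the HAProxy VIP instead of a single master
KUBELET_API_SERVER="--api-servers=http://172.16.16.100:8080"
KUBELET_ARGS=""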

On both master nodes, I see the minion/node status as Ready:

[root@master1 ~]# kubectl get nodes
NAME           STATUS    AGE
172.16.16.50   Ready     2h

[root@master2 ~]# kubectl get nodes
NAME           STATUS    AGE
172.16.16.50   Ready     2h

I set up a sample nginx pod using the following:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

I created the Replication Controller on Master1 using:

[root@master1 ~]# kubectl create -f nginx.yaml
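
As a quick check, the controller should report two desired and two current replicas:

[root@master1 ~]# kubectl get rc nginx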

On both masters I can see the pods that were created:

[root@master1 ~]# kubectl get po
NAME          READY     STATUS    RESTARTS   AGE
nginx-jwpxd   1/1       Running   0          29m
nginx-q613j   1/1       Running   0          29m

[root@master2 ~]# kubectl get po
NAME          READY     STATUS    RESTARTS   AGE
nginx-jwpxd   1/1       Running   0          29m
nginx-q613j   1/1       Running   0          29m

Now, thinking logically, if I were to kill the Master1 node and delete the pods on Master2, Master2 should recreate the pods. So that is what I did.

Master1

[root@master1 ~]# systemctl stop kube-scheduler ; systemctl stop kube-apiserver ; systemctl stop kube-controller-manager
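
To make sure all three control-plane services on Master1 are really down, something like:

[root@master1 ~]# systemctl is-active kube-apiserver kube-scheduler kube-controller-manager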

Master2

[root@slave1 kubernetes]# kubectl delete po --all
pod "nginx-l7mvc" deleted
pod "nginx-r3m58" deleted

Now Master2 should create the pods, since the Replication Controller is still up and running. But the new pods are stuck:

[root@master2 kubernetes]# kubectl get po
NAME          READY     STATUS        RESTARTS   AGE
nginx-l7mvc   1/1       Terminating   0          13m
nginx-qv6z9   0/1       Pending       0          13m
nginx-r3m58   1/1       Terminating   0          13m
nginx-rplcz   0/1       Pending       0          13m

I have waited for quite a while, but the pods remain in this state.

But when I restart the services on Master1:

[root@master1 ~]# systemctl start kube-scheduler ; systemctl start kube-apiserver ; systemctl start kube-controller-manager

then I see progress on Master1:

NAME          READY     STATUS              RESTARTS   AGE
nginx-qv6z9   0/1       ContainerCreating   0          14m
nginx-rplcz   0/1       ContainerCreating   0          14m

[root@slave1 kubernetes]# kubectl get po
NAME          READY     STATUS    RESTARTS   AGE
nginx-qv6z9   1/1       Running   0          15m
nginx-rplcz   1/1       Running   0          15m

Why didn't Master2 recreate the pods? That is the confusion I am trying to clear up. I have spent a long time setting up a fully functional HA configuration, and it seems it will only come together once I can figure out this puzzle.

1 Answer:

Answer 0 (score: 0)

It seems to me that the problem comes from the fact that Master2 does not have the --leader-elect flag enabled. Only one scheduler and one controller-manager process can be active at any given time, and that is the reason --leader-elect exists: its purpose is to have the replicas "compete" to determine which scheduler and controller-manager process is active at a given moment. Since you did not set the flag on both master nodes, there were two scheduler and two controller-manager processes active at once, and you ran into conflicts as a result. To fix this, I advise you to enable this flag on all of the master nodes.
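
Concretely, that means setting the same flags on Master2 that you already have on Master1 and restarting the services. A sketch, following the file layout from your question:

[root@master2 kubernetes]# cat scheduler
KUBE_SCHEDULER_ARGS="--leader-elect"

[root@master2 kubernetes]# cat controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--leader-elect"

[root@master2 kubernetes]# systemctl restart kube-scheduler kube-controller-manager

Once both masters run with the flag, you can see which replica currently holds the lock by inspecting the leader-election annotation on the kube-scheduler (and kube-controller-manager) endpoints objects in kube-system, assuming the default endpoints-based lock in this version:

[root@master2 kubernetes]# kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep control-plane.alpha.kubernetes.io/leader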

More importantly, per the Kubernetes documentation (https://kubernetes.io/docs/tasks/administer-cluster/highly-available-master/#best-practices-for-replicating-masters-for-ha-clusters):

Do not use a cluster with two master replicas. Consensus on a two-replica cluster requires both replicas to be running when changing persistent state. As a result, both replicas are needed, and a failure of any single replica puts the cluster into a majority-failure state. In terms of HA, a two-replica cluster is therefore worse than a single-replica cluster.
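
For reference, the quorum arithmetic behind that recommendation: a consensus cluster needs a majority of floor(n/2) + 1 members to make progress, so

# n = 1  ->  quorum = 1  ->  tolerates 0 failures
# n = 2  ->  quorum = 2  ->  tolerates 0 failures (and now there are two machines whose failure breaks the cluster)
# n = 3  ->  quorum = 2  ->  tolerates 1 failure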