I've been trying to install Istio 1.10.0 on my 7-node (3 control, 4 worker) test cluster, and it is driving me crazy. Each VM runs CentOS 7.9.2009 on my VMware ESXi 6.7.0 Update 3 home lab. I use a "kube-vip:latest" static pod in /etc/kubernetes/manifests/kube-vip.yaml to load-balance the three control nodes behind the name kubernetes-lb. When I try to install Istio per the official instructions, it times out at "Installing egress/ingress gateways": the spinner runs for about 5 minutes and then the install fails.
On every control-plane/master node I see lots of errors like this in /var/log/messages:
kube-api-access-4phqm\" (UniqueName: \"kubernetes.io/projected/26b8bd27-e5ca-46a3-b7b2-87dcf8d515bd-kube-api-access-4phqm\") pod \"kube-proxy-mtzjn\" (UID: \"26b8bd27-e5ca-46a3-b7b2-87dcf8d515bd\") : failed to fetch token: the API server does not have TokenRequest endpoints enabled"
If I'm reading the docs on kube-vip in hybrid mode as static pods correctly, they say it doesn't support "kubernetes tokens". Would that be the "TokenRequest" I can't seem to get working? The kube-vip documentation is pretty thin, and I can't get it to work in DaemonSet mode either.
Somewhere in my 80 open Chrome tabs I found this Stack Overflow question touching on the same problem. It mentions the following command: if it returns nothing, third-party JWT tokens are not supported, and on my cluster it does indeed return nothing.
kubectl get --raw /api/v1 | jq '.resources[] | select(.name | index("serviceaccounts/token"))'
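For comparison, my understanding is that on a cluster where the TokenRequest endpoints are enabled, that command should print an entry for the `serviceaccounts/token` subresource, roughly like this (reconstructed from the Kubernetes API docs, not captured from my cluster):

```json
{
  "name": "serviceaccounts/token",
  "singularName": "",
  "namespaced": true,
  "group": "authentication.k8s.io",
  "version": "v1",
  "kind": "TokenRequest",
  "verbs": ["create"]
}
```

On my cluster the filter matches nothing at all, which seems to confirm the endpoint is missing.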
The same link says to make some changes to kube-apiserver.yaml, but the values I tried did not produce the expected result.
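For reference, the changes suggested there are the service-account token flags on the API server. My understanding (untested on this cluster, and the issuer URL below is just the commonly used default, not something taken from my config) is that /etc/kubernetes/manifests/kube-apiserver.yaml needs something like:

```yaml
spec:
  containers:
  - command:
    - kube-apiserver
    # ... existing flags ...
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    # The two flags below are what enable the TokenRequest endpoints;
    # kubeadm is supposed to add them by default on newer clusters.
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
```

Since kube-apiserver runs as a static pod, the kubelet should restart it as soon as the manifest is saved, but whatever values I put in did not make the jq check above return anything.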
Am I missing something obvious here? Do I need to stand up a brand-new K8s cluster and start from scratch? If so, does anyone have a recommendation other than kube-vip for load-balancing the control plane?
Here is some information about my test cluster. kube-vip itself seems to be working fine: if I restart, or even hard power off, the node whose kube-vip pod is currently serving the VIP (which has an internal DNS record of kubernetes-lb), it fails over to another control-plane/master node without a problem.
kubernetes-lb 192.168.0.29
[16:01:23] [root@kubernetes1 helm-chart] kubectl get nodes -o wide
NAME          STATUS   ROLES                  AGE    VERSION    INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
kubernetes1   Ready    control-plane,master   328d   v1.21.1    192.168.0.30   <none>        CentOS Linux 7 (Core)   3.10.0-1160.25.1.el7.x86_64   docker://20.10.6
kubernetes2   Ready    control-plane,master   328d   v1.21.1    192.168.0.31   <none>        CentOS Linux 7 (Core)   3.10.0-1160.25.1.el7.x86_64   docker://20.10.6
kubernetes3   Ready    control-plane,master   328d   v1.21.1    192.168.0.32   <none>        CentOS Linux 7 (Core)   3.10.0-1160.25.1.el7.x86_64   docker://20.10.6
kubernetes4   Ready    worker                 328d   v1.19.11   192.168.0.33   <none>        CentOS Linux 7 (Core)   3.10.0-1160.25.1.el7.x86_64   docker://20.10.6
kubernetes5   Ready    worker                 328d   v1.19.11   192.168.0.34   <none>        CentOS Linux 7 (Core)   3.10.0-1160.25.1.el7.x86_64   docker://20.10.6
kubernetes6   Ready    worker                 328d   v1.19.11   192.168.0.35   <none>        CentOS Linux 7 (Core)   3.10.0-1160.25.1.el7.x86_64   docker://20.10.6
kubernetes7   Ready    worker                 328d   v1.19.11   192.168.0.36   <none>        CentOS Linux 7 (Core)   3.10.0-1160.25.1.el7.x86_64   docker://20.10.6
kube-vip.yaml:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args:
    - manager
    env:
    - name: vip_arp
      value: "true"
    - name: vip_interface
      value: ens224
    - name: port
      value: "6443"
    - name: vip_cidr
      value: "32"
    - name: cp_enable
      value: "true"
    - name: cp_namespace
      value: kube-system
    - name: vip_ddns
      value: "false"
    - name: svc_enable
      value: "true"
    - name: vip_leaderelection
      value: "true"
    - name: vip_leaseduration
      value: "5"
    - name: vip_renewdeadline
      value: "3"
    - name: vip_retryperiod
      value: "1"
    - name: vip_address
      value: 192.168.0.28
    image: plndr/kube-vip:0.3.4
    imagePullPolicy: Always
    name: kube-vip
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
        - SYS_TIME
        - NET_BROADCAST
    volumeMounts:
    - mountPath: /etc/kubernetes/admin.conf
      name: kubeconfig
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/admin.conf
    name: kubeconfig
status: {}
Output of kubectl -n kube-system get cm kubeadm-config -o yaml:
apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      extraArgs:
        authorization-mode: Node,RBAC
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta2
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controlPlaneEndpoint: kubernetes-lb:6443
    controllerManager: {}
    dns:
      type: CoreDNS
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: k8s.gcr.io
    kind: ClusterConfiguration
    kubernetesVersion: v1.21.1
    networking:
      dnsDomain: cluster.local
      podSubnet: 10.244.0.0/16
      serviceSubnet: 10.96.0.0/12
    scheduler: {}
  ClusterStatus: |
    apiEndpoints:
      kubernetes1:
        advertiseAddress: 192.168.0.30
        bindPort: 6443
      kubernetes2:
        advertiseAddress: 192.168.0.31
        bindPort: 6443
      kubernetes3:
        advertiseAddress: 192.168.0.32
        bindPort: 6443
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterStatus
kind: ConfigMap
metadata:
  creationTimestamp: "2020-07-06T02:01:54Z"
  name: kubeadm-config
  namespace: kube-system
  resourceVersion: "17302171"
  selfLink: /api/v1/namespaces/kube-system/configmaps/kubeadm-config
  uid: a7d3beae-a3c5-49b5-b1f0-7e9007b181fe