I set up a cluster with kubeadm. In the last step I ran kubeadm init --config kubeadm.conf --v=5 and got an error about the clusterIP value. Here is part of the output:
I0220 00:16:27.625920 31630 clusterinfo.go:79] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I0220 00:16:27.947941 31630 kubeletfinalize.go:88] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0220 00:16:27.949398 31630 kubeletfinalize.go:132] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
[addons]: Migrating CoreDNS Corefile
I0220 00:16:28.447420 31630 dns.go:381] the CoreDNS configuration has been migrated and applied: .:53 {
    errors
    health {
       lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
       pods insecure
       fallthrough in-addr.arpa ip6.arpa
       ttl 30
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
.
I0220 00:16:28.447465 31630 dns.go:382] the old migration has been saved in the CoreDNS ConfigMap under the name [Corefile-backup]
I0220 00:16:28.447486 31630 dns.go:383] The changes in the new CoreDNS Configuration are as follows:
Service "kube-dns" is invalid: spec.clusterIP: Invalid value: "10.10.0.10": field is immutable
unable to create/update the DNS service
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.createDNSService
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns/dns.go:323
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.createCoreDNSAddon
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns/dns.go:305
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.coreDNSAddon
My configuration file looks like this:
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.5.151
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master02
#  taints:
#  - effect: NoSchedule
#    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - "172.16.5.150"
  - "172.16.5.151"
  - "172.16.5.152"
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:
    endpoints:
    - "https://172.16.5.150:2379"
    - "https://172.16.5.151:2379"
    - "https://172.16.5.152:2379"
    caFile: /etc/k8s/pki/ca.pem
    certFile: /etc/k8s/pki/etcd.pem
    keyFile: /etc/k8s/pki/etcd.key
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.10.0.0/16
  podSubnet: 192.168.0.0/16
scheduler: {}
I checked the kube-apiserver.yaml generated by kubeadm. The --service-cluster-ip-range=10.10.0.0/16 setting covers 10.10.0.10, as you can see below:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=172.16.5.151
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/k8s/pki/ca.pem
    - --etcd-certfile=/etc/k8s/pki/etcd.pem
    - --etcd-keyfile=/etc/k8s/pki/etcd.key
    - --etcd-servers=https://172.16.5.150:2379,https://172.16.5.151:2379,https://172.16.5.152:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.10.0.0/16
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.17.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 172.16.5.151
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/k8s/pki
      name: etcd-certs-0
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/k8s/pki
      type: DirectoryOrCreate
    name: etcd-certs-0
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
status: {}
As shown above, every service-ip-range setting is 10.10.0.0/16. Strangely, when I run "kubectl get svc", the ClusterIP of the kubernetes service is 10.96.0.1:
[root@master02 manifests]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2d3h
This means the default service-ip-range of 10.96.0.0/12 is still in effect and my modification did not take hold. Does anyone know how to customize the service-ip-range, and how to fix my problem?
Answer 0 (score: 0)
This node had previously joined the cluster as a worker node. Later I reset it with "kubeadm reset", and after the reset I joined it back as a control-plane (master) node. That is how I ended up with the error in the question above. The error occurs because the old clusterIP range was recorded in the etcd cluster before the reset, and "kubeadm reset" does not clear the data in etcd, so the new clusterIP definition conflicts with the original one. The solution is therefore to clear the data in etcd and run the initialization again. (Since my cluster is a test cluster, I simply wiped etcd directly. Be careful in production environments.)
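A sketch of that cleanup, assuming etcdctl v3 is installed on a node and reusing the endpoints and certificate paths from the kubeadm.conf in the question. This deletes everything in the external etcd, so it is suitable for test clusters only:

```shell
# DESTRUCTIVE: wipes all data in the external etcd cluster. Test clusters only.
command -v etcdctl >/dev/null || { echo "run this on a node with etcdctl installed"; exit 0; }

export ETCDCTL_API=3
etcdctl \
  --endpoints=https://172.16.5.150:2379,https://172.16.5.151:2379,https://172.16.5.152:2379 \
  --cacert=/etc/k8s/pki/ca.pem \
  --cert=/etc/k8s/pki/etcd.pem \
  --key=/etc/k8s/pki/etcd.key \
  del "" --from-key=true   # delete every key, starting from the empty key

kubeadm reset -f           # clear local kubeadm state as well
kubeadm init --config kubeadm.conf --v=5
```

After the wipe, kubeadm init can record the new serviceSubnet (10.10.0.0/16) without colliding with the old one.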
Answer 1 (score: 0)
Posting this answer as a community wiki to expand on it and explain the root cause.
When you bootstrap a cluster with $ kubeadm init without specifying any flags, kubeadm creates the cluster with default values. You can check in the Kubernetes docs which flags can be specified during initialization and what their defaults are:
--service-cidr string     Default: "10.96.0.0/12"
Use alternative range of IP address for service VIPs.
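So to get a custom range, the CIDR has to be supplied explicitly. A sketch of the two equivalent ways, using the values from the question:

```shell
# Either pass the CIDRs directly as flags ...
command -v kubeadm >/dev/null || { echo "kubeadm not installed"; exit 0; }
kubeadm init --service-cidr=10.10.0.0/16 --pod-network-cidr=192.168.0.0/16

# ... or set networking.serviceSubnet: 10.10.0.0/16 and
# networking.podSubnet: 192.168.0.0/16 in the ClusterConfiguration,
# as the OP's kubeadm.conf already does.
```

Both routes feed the same value into the apiserver's --service-cluster-ip-range; they only fail to take effect when old state is still present, as described below.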
That is why the default kubernetes service uses 10.96.0.1 as its ClusterIP.
Here the OP wanted to use his own configuration instead:
--config string
Path to a kubeadm configuration file.
The entire init workflow can be found here.
As the Kubernetes documentation explains, kubeadm reset performs a best-effort revert of the changes made by kubeadm init or kubeadm join. Depending on the configuration, some state can survive in the cluster.
This is exactly the issue the OP ran into, described here under External etcd clean up:
kubeadm reset will not delete any etcd data if external etcd is used. This means that if you run kubeadm init again using the same etcd endpoints, you will see state from previous clusters.
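One way to see that leftover state directly, assuming etcdctl v3 and the certificate paths from the question (/registry/services/specs is the default etcd key prefix under which the apiserver stores Service objects):

```shell
# List the Service objects the previous cluster left behind in external etcd;
# a kube-dns entry here is what makes the new clusterIP conflict.
command -v etcdctl >/dev/null || { echo "run this on a node with etcdctl installed"; exit 0; }

export ETCDCTL_API=3
etcdctl \
  --endpoints=https://172.16.5.150:2379 \
  --cacert=/etc/k8s/pki/ca.pem \
  --cert=/etc/k8s/pki/etcd.pem \
  --key=/etc/k8s/pki/etcd.key \
  get /registry/services/specs --prefix --keys-only
```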
Finally, about the error itself: Service "kube-dns" is invalid: spec.clusterIP: Invalid value: "10.10.0.10": field is immutable.
In Kubernetes, some fields are protected to prevent changes that could break the cluster's operation.
If a field is immutable but has to be changed, the object must be deleted and created again.
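Applied to this case, a minimal sketch of that delete-and-recreate path (assuming the kubeadm.conf from the question; this is less drastic than wiping etcd):

```shell
command -v kubectl >/dev/null || { echo "kubectl not installed"; exit 0; }

# Delete the Service that still carries the old, immutable clusterIP ...
kubectl -n kube-system delete service kube-dns

# ... then re-run only the DNS addon phase so kubeadm recreates kube-dns
# with the clusterIP derived from the new serviceSubnet (10.10.0.10).
kubeadm init phase addon coredns --config kubeadm.conf
```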