Error after deleting and recreating kube-controller-manager

Date: 2019-11-29 12:21:28

Tags: kubernetes

I have been messing around with kube-controller-manager (I deleted it and then tried to recreate it), but since then, when I look at the kube-controller-manager logs, all I see are messages like these:

E1129 12:11:57.829927       1 node_lifecycle_controller.go:952] Error updating node ip-10-25-12-80.eu-west-1.compute.internal: Operation cannot be fulfilled on nodes "ip-10-25-12-80.eu-west-1.compute.internal": the object has been modified; please apply your changes to the latest version and try again
E1129 12:12:02.866317       1 node_lifecycle_controller.go:952] Error updating node ip-10-25-12-71.eu-west-1.compute.internal: Operation cannot be fulfilled on nodes "ip-10-25-12-71.eu-west-1.compute.internal": the object has been modified; please apply your changes to the latest version and try again
E1129 12:12:12.901763       1 node_lifecycle_controller.go:952] Error updating node ip-10-25-5-38.eu-west-1.compute.internal: Operation cannot be fulfilled on nodes "ip-10-25-5-38.eu-west-1.compute.internal": the object has been modified; please apply your changes to the latest version and try again
E1129 12:12:12.936580       1 node_lifecycle_controller.go:952] Error updating node ip-10-25-12-39.eu-west-1.compute.internal: Operation cannot be fulfilled on nodes "ip-10-25-12-39.eu-west-1.compute.internal": the object has been modified; please apply your changes to the latest version and try again

The kube-controller-manager pod I am looking at is stuck in Pending status, and it looks like the resourceVersion of the pod I created does not match the version recorded by the apiserver, but I don't know what to do to fix this.
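For context, the "the object has been modified; please apply your changes to the latest version and try again" error is the apiserver's optimistic-concurrency check rejecting an update made with a stale resourceVersion. The sketch below is a minimal simulation of that model; the names (Store, Conflict, update_with_retry) are illustrative and not real client-go or kubectl APIs.

```python
class Conflict(Exception):
    """Stand-in for the apiserver's 409 Conflict response."""

class Store:
    """Minimal stand-in for the apiserver's versioned object store."""
    def __init__(self, obj):
        self.obj = dict(obj)
        self.resource_version = 1

    def get(self):
        # Reads return the object together with its current resourceVersion.
        return dict(self.obj), self.resource_version

    def update(self, obj, resource_version):
        # Writes carrying a stale resourceVersion are rejected.
        if resource_version != self.resource_version:
            raise Conflict("the object has been modified; please apply "
                           "your changes to the latest version and try again")
        self.obj = dict(obj)
        self.resource_version += 1

def update_with_retry(store, mutate, retries=3):
    """On conflict, re-read the latest version and retry the mutation,
    which is what well-behaved controllers are expected to do."""
    for _ in range(retries):
        obj, rv = store.get()
        mutate(obj)
        try:
            store.update(obj, rv)
            return True
        except Conflict:
            continue
    return False
```

The node_lifecycle_controller errors above suggest the controller keeps losing this race for the Node objects; occasional conflicts are normal, but a constant stream of them usually means two writers are fighting over the same objects.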

This is the .yaml file:

apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  labels:
    app.kubernetes.io/name: control-plane
    app.kubernetes.io/component: kube-controller-manager
    app.kubernetes.io/part-of: static-pods
spec:
  hostNetwork: true
  nodeName: ip-10-25-5-38.eu-west-1.compute.internal
  priorityClassName: system-cluster-critical
  dnsPolicy: Default
  containers:
  - name: kube-controller-manager
    image: k8s.gcr.io/hyperkube:v1.15.0
    imagePullPolicy: IfNotPresent
    command:
    - ./hyperkube
    - kube-controller-manager
    - --cloud-provider=aws
    - --bind-address=127.0.0.1
    - --allocate-node-cidrs=true
    - --configure-cloud-routes=false
    - --cluster-name=development
    - --cluster-cidr=10.2.0.0/16
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --use-service-account-credentials=true
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 200m
    volumeMounts:
    - name: k8s-certs
      mountPath: /etc/kubernetes/pki
      readOnly: true
    - name: ca-certs
      mountPath: /etc/ssl/certs
      readOnly: true
    - name: kubeconfig
      mountPath: /etc/kubernetes/controller-manager.conf
      readOnly: true
    - name: etc-pki
      mountPath: /etc/pki
      readOnly: true
  volumes:
  - name: kubeconfig
    hostPath:
      path: /etc/kubernetes/controller-manager.conf
      type: FileOrCreate
  - name: etc-pki
    hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
  - name: k8s-certs
    hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
  - name: ca-certs
    hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
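One thing worth noting: a static pod like this is owned by the kubelet reading the manifest directory, and the pod visible through the apiserver is only a read-only mirror. As far as I understand kubelet behavior, the mirror pod's name is the manifest's metadata.name with the node name appended, which is where the long `kube-controller-manager-ip-10-25-5-38...` name comes from. A trivial sketch of that assumed naming scheme:

```python
def mirror_pod_name(manifest_name: str, node_name: str) -> str:
    """Assumed kubelet convention: the API-visible mirror pod is named
    "<manifest metadata.name>-<nodeName>"."""
    return f"{manifest_name}-{node_name}"

print(mirror_pod_name("kube-controller-manager",
                      "ip-10-25-5-38.eu-west-1.compute.internal"))
# kube-controller-manager-ip-10-25-5-38.eu-west-1.compute.internal
```

If that is right, then deleting or recreating the pod via kubectl only touches the mirror object, and the manifest should instead carry the short name and be edited on the node itself.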

0 Answers