KubeDNS x509: failed to load system roots and no roots provided, but curl works

Date: 2017-01-24 17:03:13

Tags: kubernetes

I'm running into a problem with the latest release of Kubernetes (1.5.1). I have a fairly unconventional setup consisting of 5 Red Hat Enterprise Linux servers: 3 are nodes and 2 are masters. The two masters sit on an etcd cluster, and flannel has also been added, all on bare metal. I keep getting this log looping in the kube-dns container:

Failed to list *api.Endpoints: Get https://*.*.*.33:443/api/v1/endpoints?resourceVersion=0: x509: failed to load system roots and no roots provided

I have done a lot of testing on the certificates. curl works perfectly with the same credentials. The certificates were generated following the official Kubernetes recommendations.
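
If it helps, my understanding is that this error string comes from Go's crypto/x509 package: it appears when a Go TLS client has no explicit RootCAs configured and no system CA bundle can be loaded (for example, nothing usable under /etc/ssl/certs inside the container), whereas curl on the host simply uses the host's CA bundle or whatever --cacert I pass it. Below is a minimal sketch of what I mean; the CA path and the X.X.X.33 address are taken from my manifests further down, everything else is purely illustrative:

// ca_check.go - illustrative only: shows why a Go client can fail with
// "x509: failed to load system roots and no roots provided" while curl,
// which relies on the host's CA bundle, succeeds.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// If RootCAs is left nil, Go falls back to the system trust store; in a
	// container without a CA bundle that is exactly the "failed to load
	// system roots and no roots provided" situation. Loading the cluster CA
	// explicitly (like curl --cacert) removes the dependency on system roots.
	caPEM, err := os.ReadFile("/etc/kubernetes/pki/ca.pem") // path from the manifests below
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read CA file:", err)
		os.Exit(1)
	}
	roots := x509.NewCertPool()
	if !roots.AppendCertsFromPEM(caPEM) {
		fmt.Fprintln(os.Stderr, "ca.pem contains no usable certificates")
		os.Exit(1)
	}

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: roots}, // explicit roots, no system bundle needed
		},
	}
	resp, err := client.Get("https://X.X.X.33:443/api/v1/endpoints")
	if err != nil {
		fmt.Fprintln(os.Stderr, "request failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}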

Here are my various configuration files (just check the IPs and hostnames if needed).

kube-apiserver.yml

{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-apiserver",
    "namespace": "kube-system",
    "labels": {
      "component": "kube-apiserver",
      "tier": "control-plane"
    }
  },
  "spec": {
    "volumes": [
      {
        "name": "certs",
        "hostPath": {
          "path": "/etc/ssl/certs"
        }
      },
      {
        "name": "pki",
        "hostPath": {
          "path": "/etc/kubernetes"
        }
      }
    ],
    "containers": [
      {
        "name": "kube-apiserver",
        "image": "gcr.io/google_containers/kube-apiserver-amd64:v1.5.1",
        "command": [
          "/usr/local/bin/kube-apiserver",
          "--v=0",
          "--insecure-bind-address=127.0.0.1",
          "--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota",
          "--service-cluster-ip-range=100.64.0.0/12",
          "--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem",
          "--client-ca-file=/etc/kubernetes/pki/ca.pem",
          "--tls-cert-file=/etc/kubernetes/pki/apiserver.pem",
          "--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem",
          "--secure-port=5443",
          "--allow-privileged",
          "--advertise-address=X.X.X.33",
          "--etcd-servers=http://X.X.X.33:2379,http://X.X.X.37:2379",
          "--kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP"
        ],
        "resources": {
          "requests": {
            "cpu": "250m"
          }
        },
        "volumeMounts": [
          {
            "name": "certs",
            "mountPath": "/etc/ssl/certs"
          },
          {
            "name": "pki",
            "readOnly": true,
            "mountPath": "/etc/kubernetes/"
          }
        ],
        "livenessProbe": {
          "httpGet": {
            "path": "/healthz",
            "port": 8080,
            "host": "127.0.0.1"
          },
          "initialDelaySeconds": 15,
          "timeoutSeconds": 15
        }
      }
    ],
    "hostNetwork": true
  }
}
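
This is not part of my setup, just a small debugging helper I put together (the file paths are the ones from the manifest above; the program itself is only a sketch): it checks whether apiserver.pem actually chains to ca.pem and prints the SANs it carries, so I can compare them with the address the clients dial:

// verify_apiserver_cert.go - hypothetical helper, not part of the cluster.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func mustRead(path string) []byte {
	b, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	return b
}

func main() {
	// Trust pool built from the CA that the clients are given.
	roots := x509.NewCertPool()
	if !roots.AppendCertsFromPEM(mustRead("/etc/kubernetes/pki/ca.pem")) {
		fmt.Fprintln(os.Stderr, "ca.pem contains no certificates")
		os.Exit(1)
	}

	// Parse the serving certificate used by --tls-cert-file.
	block, _ := pem.Decode(mustRead("/etc/kubernetes/pki/apiserver.pem"))
	if block == nil {
		fmt.Fprintln(os.Stderr, "apiserver.pem is not valid PEM")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// The address the clients dial (and the cluster service IP) should be
	// covered by one of these names.
	fmt.Println("DNS names:", cert.DNSNames)
	fmt.Println("IP SANs:  ", cert.IPAddresses)

	if _, err := cert.Verify(x509.VerifyOptions{Roots: roots}); err != nil {
		fmt.Println("verification failed:", err)
	} else {
		fmt.Println("apiserver.pem chains to ca.pem")
	}
}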

kube-controller-manager.yml

{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-controller-manager",
    "namespace": "kube-system",
    "labels": {
      "component": "kube-controller-manager",
      "tier": "control-plane"
    }
  },
  "spec": {
    "volumes": [
      {
        "name": "pki",
        "hostPath": {
          "path": "/etc/kubernetes"
        }
      }
    ],
    "containers": [
      {
        "name": "kube-controller-manager",
        "image": "gcr.io/google_containers/kube-controller-manager-amd64:v1.5.1",
        "command": [
          "/usr/local/bin/kube-controller-manager",
          "--v=0",
          "--address=127.0.0.1",
          "--leader-elect=true",
          "--master=https://X.X.X.33",
          "--cluster-name= kubernetes",
          "--kubeconfig=/etc/kubernetes/kubeadminconfig",
          "--root-ca-file=/etc/kubernetes/pki/ca.pem",
          "--service-account-private-key-file=/etc/kubernetes/pki/apiserver-key.pem",
          "--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem",
          "--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem"
        ],
        "resources": {
          "requests": {
            "cpu": "200m"
          }
        },
        "volumeMounts": [
          {
            "name": "pki",
            "readOnly": true,
            "mountPath": "/etc/kubernetes/"
          }
        ],
        "livenessProbe": {
          "httpGet": {
            "path": "/healthz",
            "port": 10252,
            "host": "127.0.0.1"
          },
          "initialDelaySeconds": 15,
          "timeoutSeconds": 15
        }
      }
    ],
    "hostNetwork": true
  }
}
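
If I understand the documentation correctly, the --root-ca-file passed to the controller-manager above is what gets placed as ca.crt into the service-account secrets, and kube-dns, when it runs with the default in-cluster configuration, reads its CA from the mounted secret at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt. The following is only a hypothetical check, but running something like it inside the kube-dns pod should show whether that CA is mounted and parsable:

// sa_ca_check.go - hypothetical: confirms the service-account CA that
// in-cluster clients rely on is mounted and contains valid certificates.
package main

import (
	"crypto/x509"
	"fmt"
	"os"
)

func main() {
	const path = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"

	pemBytes, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "CA not mounted or unreadable:", err)
		os.Exit(1)
	}

	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(pemBytes) {
		fmt.Fprintln(os.Stderr, "mounted ca.crt contains no parsable certificates")
		os.Exit(1)
	}
	fmt.Println("service-account CA looks usable:", path)
}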

kube-scheduler.yml

{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-scheduler",
    "namespace": "kube-system",
    "labels": {
      "component": "kube-scheduler",
      "tier": "control-plane"
    }
  },
  "spec": {
"volumes": [
      {
        "name": "pki",
        "hostPath": {
          "path": "/etc/kubernetes"
        }
      }
    ],
    "containers": [
      {
        "name": "kube-scheduler",
        "image": "gcr.io/google_containers/kube-scheduler-amd64:v1.5.1",
        "command": [
          "/usr/local/bin/kube-scheduler",
          "--v=0",
          "--address=127.0.0.1",
          "--leader-elect=true",
      "--kubeconfig=/etc/kubernetes/kubeadminconfig",
          "--master=https://X.X.X.33"
        ],
        "resources": {
          "requests": {
            "cpu": "100m"
          }
        },
       "volumeMounts": [
          {
            "name": "pki",
            "readOnly": true,
            "mountPath": "/etc/kubernetes/"
          }
        ],
        "livenessProbe": {
          "httpGet": {
            "path": "/healthz",
            "port": 10251,
            "host": "127.0.0.1"
          },
          "initialDelaySeconds": 15,
          "timeoutSeconds": 15
        }
      }
    ],
    "hostNetwork": true
  }
}

haproxy.yml

{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "haproxy",
    "namespace": "kube-system",
    "labels": {
      "component": "kube-apiserver",
      "tier": "control-plane"
    }
  },
  "spec": {
    "volumes": [
      {
        "name": "vol",
        "hostPath": {
          "path": "/etc/haproxy/haproxy.cfg"
        }
      }
    ],
    "containers": [
      {
        "name": "haproxy",
        "image": "docker.io/haproxy:1.7",
        "resources": {
          "requests": {
            "cpu": "250m"
          }
        },
        "volumeMounts": [
          {
            "name": "vol",
            "readOnly": true,
            "mountPath": "/usr/local/etc/haproxy/haproxy.cfg"
          }
        ]
      }
    ],
    "hostNetwork": true
  }
}

kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service 
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet 
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
        $KUBELET_ADDRESS \
        $KUBELET_POD_INFRA_CONTAINER \
        $KUBELET_ARGS \
        $KUBE_LOGTOSTDERR \
        $KUBE_ALLOW_PRIV \
        $KUBELET_NETWORK_ARGS \
        $KUBELET_DNS_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

kubelet (/etc/kubernetes/kubelet)

KUBELET_ADDRESS="--address=0.0.0.0 --port=10250"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeadminconfig --require-kubeconfig=true --pod-manifest-path=/etc/kubernetes/manifests"
KUBE_LOGTOSTDERR="--logtostderr=true --v=9"
KUBE_ALLOW_PRIV="--allow-privileged=true"
KUBELET_DNS_ARGS="--cluster-dns=100.64.0.10 --cluster-domain=cluster.local"

I have already looked at most of the related questions about this problem on the internet, so I'm hoping someone can give me a hint for debugging this.

0 Answers