So I have a 3-node Kubernetes cluster running on three Raspberry Pis running HypriotOS. Since bootstrapping and joining the nodes I haven't done anything other than install Weave. But when I type kubectl cluster-info, I see these two entries:
Kubernetes master is running at https://192.168.0.35:6443
KubeDNS is running at https://192.168.0.35:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy
When I curl the second URL, I get the following response:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \"kube-dns\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
Here is some more information about the state of the cluster:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:48:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/arm"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:30:51Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/arm"}
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-node01 1/1 Running 0 13d
kube-system kube-apiserver-node01 1/1 Running 21 13d
kube-system kube-controller-manager-node01 1/1 Running 5 13d
kube-system kube-dns-2459497834-v1g4n 3/3 Running 43 13d
kube-system kube-proxy-1hplm 1/1 Running 0 5h
kube-system kube-proxy-6bzvr 1/1 Running 0 13d
kube-system kube-proxy-cmp3q 1/1 Running 0 6d
kube-system kube-scheduler-node01 1/1 Running 8 13d
kube-system weave-net-5cq9c 2/2 Running 0 6d
kube-system weave-net-ff5sz 2/2 Running 4 13d
kube-system weave-net-z3nq3 2/2 Running 0 5h
$ kubectl get svc --all-namespaces
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes 10.96.0.1 <none> 443/TCP 13d
kube-system kube-dns 10.96.0.10 <none> 53/UDP,53/TCP 13d
$ kubectl --namespace kube-system describe pod kube-dns-2459497834-v1g4n
Name: kube-dns-2459497834-v1g4n
Namespace: kube-system
Node: node01/192.168.0.35
Start Time: Wed, 23 Aug 2017 20:34:56 +0000
Labels: k8s-app=kube-dns
pod-template-hash=2459497834
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"kube-dns-2459497834","uid":"37640de4-8841-11e7-ad32-b827eb0a...
scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 10.32.0.2
Created By: ReplicaSet/kube-dns-2459497834
Controlled By: ReplicaSet/kube-dns-2459497834
Containers:
kubedns:
Container ID: docker://9a781f1fea4c947a9115c551e65c232d5fe0aa2045e27e79eae4b057b68e4914
Image: gcr.io/google_containers/k8s-dns-kube-dns-arm:1.14.4
Image ID: docker-pullable://gcr.io/google_containers/k8s-dns-kube-dns-arm@sha256:ac677e54bef9717220a0ba2275ba706111755b2906de689d71ac44bfe425946d
Ports: 10053/UDP, 10053/TCP, 10055/TCP
Args:
--domain=cluster.local.
--dns-port=10053
--config-dir=/kube-dns-config
--v=2
State: Running
Started: Tue, 29 Aug 2017 19:09:10 +0000
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Tue, 29 Aug 2017 17:07:49 +0000
Finished: Tue, 29 Aug 2017 19:09:08 +0000
Ready: True
Restart Count: 18
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:10054/healthcheck/kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
Environment:
PROMETHEUS_PORT: 10055
Mounts:
/kube-dns-config from kube-dns-config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-rf19g (ro)
dnsmasq:
Container ID: docker://f8e17df36310bc3423a74e3f6989204abac9e83d4a8366561e54259418030a50
Image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-arm:1.14.4
Image ID: docker-pullable://gcr.io/google_containers/k8s-dns-dnsmasq-nanny-arm@sha256:a7469e91b4b20f31036448a61c52e208833c7cb283faeb4ea51b9fa22e18eb69
Ports: 53/UDP, 53/TCP
Args:
-v=2
-logtostderr
-configDir=/etc/k8s/dns/dnsmasq-nanny
-restartDnsmasq=true
--
-k
--cache-size=1000
--log-facility=-
--server=/cluster.local/127.0.0.1#10053
--server=/in-addr.arpa/127.0.0.1#10053
--server=/ip6.arpa/127.0.0.1#10053
State: Running
Started: Tue, 29 Aug 2017 19:09:52 +0000
Last State: Terminated
Reason: Error
Exit Code: 137
$ kubectl --namespace kube-system describe svc kube-dns
Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: <none>
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.96.0.10
Port: dns 53/UDP
Endpoints: 10.32.0.2:53
Port: dns-tcp 53/TCP
Endpoints: 10.32.0.2:53
Session Affinity: None
Events: <none>
I can't figure out what is going on here, since I haven't done anything other than follow the instructions here. The problem has persisted across multiple versions of Kubernetes and multiple network overlays, including Flannel, so it is starting to make me think it is some issue with the RPis themselves.
Answer 0 (score: 0)
Update: the assumption below is not the complete explanation for this error message. The proxy API states:

connect GET requests to proxy of Pod

GET /api/v1/namespaces/{namespace}/pods/{name}/proxy

The question now is what exactly "connect GET requests to proxy of Pod" means, but I strongly believe it means the GET request is forwarded to the pod. That would mean the assumption below is correct.
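One way to check that interpretation is to proxy to a pod port that actually serves HTTP. The liveness probe shown in the question has kubedns serving a health endpoint on port 10054, so a pod-proxy URL along these lines should answer (the pod name is taken from the question; the local proxy port 8001 is an assumption):

```shell
# Build the pod-proxy URL for kubedns's HTTP health endpoint (port 10054,
# as shown by the pod's liveness probe). Pod name is from the question.
pod="kube-dns-2459497834-v1g4n"
url="http://localhost:8001/api/v1/namespaces/kube-system/pods/${pod}:10054/proxy/healthcheck/kubedns"
echo "$url"
# With a local proxy running (kubectl proxy --port=8001), curl "$url"
# should get an HTTP answer instead of the 503 Status object.
```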
I checked other services that are not designed for HTTP traffic, and they all return this error message, while services designed for HTTP traffic work fine (e.g. /api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy).
I think this is normal behavior and nothing to worry about. If you look at the kube-dns service object in your cluster, you can see that it only exposes port 53 on its internal endpoints, which is the standard DNS port, so I assume the kube-dns service only responds to proper DNS queries. With curl you are issuing a plain HTTP GET request against this service, which results in the error response.
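A more appropriate smoke test is therefore to speak DNS to the service instead of HTTP. A minimal sketch (the busybox image and the pod name dnstest are assumptions; the ClusterIP 10.96.0.10 is from the svc listing in the question):

```shell
# Query the kube-dns ClusterIP with an actual DNS lookup from a
# throwaway pod, instead of sending it an HTTP GET:
kubectl run -i --rm dnstest --image=busybox --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local 10.96.0.10
```

If this resolves, DNS is healthy even though curling the service through the apiserver proxy returns a 503.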
Judging from the cluster information you posted, all of your pods look fine, and I would bet your service endpoints are shown correctly as well. You can check with kubectl get ep kube-dns --namespace=kube-system, which should yield something like this:
$ kubectl get ep kube-dns --namespace=kube-system
NAME ENDPOINTS AGE
kube-dns 100.101.26.65:53,100.96.150.198:53,100.101.26.65:53 + 1 more... 20d
On my cluster (k8s 1.7.3), a curl GET to /api/v1/namespaces/kube-system/services/kube-dns/proxy also results in the error message you mentioned, but I have never had any DNS issues, so I hope my assumption about this is correct.