I am trying to configure UDP ingress ports using the nginx-controller, but I keep getting the following in the nginx-controller's error log:
$ kubectl -n kube-system logs -f nginx-ingress-controller-2391389042-xzmc7
2017/03/01 18:08:20 [error] 62#62: *8 no live upstreams while connecting to upstream, udp client: 192.168.0.20, server: 0.0.0.0:53, upstream: "udp-kube-system-kube-dns-53", bytes from/to client:1/0, bytes from/to upstream:0/0
As you can see in the nginx configuration below, the upstream server is not mapped to the corresponding endpoint IP.
Configuration
I set up my environment as follows:
# 1. install kubernetes with kubeadm
kubeadm init --pod-network-cidr 10.244.0.0/16
# 2. use flannel as virtual network backend
curl -sSL https://rawgit.com/coreos/flannel/master/Documentation/kube-flannel.yml | kubectl create -f -
# 3. install the nginx-controller from https://raw.githubusercontent.com/kubernetes/ingress/master/examples/deployment/nginx/nginx-ingress-controller.yaml
# edit the controller to specify the host and enable the UDP ports (see bottom of the entry for reference)
# 4. create the ConfigMap for the udp ports
# udp example: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/udp
curl -sSL https://raw.githubusercontent.com/kubernetes/contrib/master/ingress/controllers/nginx/examples/udp/udp-configmap-example.yaml | kubectl create -f -
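For reference, the upstream example ConfigMap maps a listen port to a namespace/service:port target. A minimal sketch of what step 4 is expected to create (the name udp-configmap-example has to match the --udp-services-configmap flag further down, and the namespace is assumed here to be kube-system so the controller's reference resolves):
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-configmap-example
  namespace: kube-system
data:
  53: "kube-system/kube-dns:53"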
Debugging
The nginx configuration:
$ kubectl -n kube-system exec nginx-ingress-controller-2391389042-xzmc7 -- cat /etc/nginx/nginx.conf | grep -i udp -C 10
    upstream udp-kube-system-kube-dns-53 {
        server 127.0.0.1:8181 down;
    }
    # TCP services
    # UDP services
    server {
        listen 53 udp;
        proxy_responses 1;
        proxy_pass udp-kube-system-kube-dns-53;
    }
}
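The server 127.0.0.1:8181 down; line appears to be the placeholder the controller writes into a stream upstream when it has found no endpoints for the target service, and it is what produces the "no live upstreams" error above. If the endpoint were being picked up, the upstream block should presumably point at the pod IP shown in the service description below, roughly:
upstream udp-kube-system-kube-dns-53 {
    server 10.244.0.13:53;
}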
Description of the kube-dns service:
$ kubectl -n kube-system describe svc kube-dns
Name:              kube-dns
Namespace:         kube-system
Labels:            component=kube-dns
                   k8s-app=kube-dns
                   kubernetes.io/cluster-service=true
                   kubernetes.io/name=KubeDNS
                   name=kube-dns
                   tier=node
Selector:          name=kube-dns
Type:              ClusterIP
IP:                10.96.0.10
Port:              dns 53/UDP
Endpoints:         10.244.0.13:53
Port:              dns-tcp 53/TCP
Endpoints:         10.244.0.13:53
Session Affinity:  None
No events.
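So the service itself does have a live endpoint. To double-check this on the Kubernetes side (a verification step added here for completeness, not from the original post), the endpoints object can be listed directly; it should show 10.244.0.13:53 for both the UDP and TCP ports, matching the describe output above:
$ kubectl -n kube-system get ep kube-dns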
Description of the nginx-controller pod:
$ kubectl -n kube-system describe po nginx-ingress-controller-2391389042-xzmc7
Name:           nginx-ingress-controller-2391389042-xzmc7
Namespace:      kube-system
Node:           kubeworker-1/192.168.0.20
Start Time:     Wed, 01 Mar 2017 19:07:26 +0100
Labels:         k8s-app=nginx-ingress-controller
                pod-template-hash=2391389042
Status:         Running
IP:             192.168.0.20
Controllers:    ReplicaSet/nginx-ingress-controller-2391389042
Containers:
  nginx-ingress-controller:
    Container ID:   docker://65b3b9d2ce55932ca0940d561cec6b60dad26a317f2bcf54bbfa3a85e5908a65
    Image:          gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.2
    Image ID:       docker-pullable://gcr.io/google_containers/nginx-ingress-controller@sha256:977a68f887e1621fb30e80939b3a8f875cbb20c549af1e42d12f2fef272b8e9b
    Ports:          80/TCP, 443/TCP, 53/UDP
    Args:
      /nginx-ingress-controller
      --default-backend-service=$(POD_NAMESPACE)/default-http-backend
      --udp-services-configmap=$(POD_NAMESPACE)/udp-configmap-example
    State:          Running
      Started:      Wed, 01 Mar 2017 19:07:26 +0100
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:10254/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-x8qk9 (ro)
    Environment Variables:
      POD_NAME:       nginx-ingress-controller-2391389042-xzmc7 (v1:metadata.name)
      POD_NAMESPACE:  kube-system (v1:metadata.namespace)
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  default-token-x8qk9:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-x8qk9
QoS Class:      BestEffort
Tolerations:    <none>
Events:
  FirstSeen  LastSeen  Count  From                    SubObjectPath                              Type     Reason            Message
  ---------  --------  -----  ----                    -------------                              ----     ------            -------
  25m        25m       3      {default-scheduler }                                               Warning  FailedScheduling  pod (nginx-ingress-controller-2391389042-xzmc7) failed to fit in any node
                                                                                                                            fit failure summary on nodes : MatchNodeSelector (1), PodFitsHostPorts (1), PodToleratesNodeTaints (1)
  25m        25m       1      {default-scheduler }                                               Normal   Scheduled         Successfully assigned nginx-ingress-controller-2391389042-xzmc7 to kubeworker-1
  25m        25m       1      {kubelet kubeworker-1}  spec.containers{nginx-ingress-controller}  Normal   Pulled            Container image "gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.2" already present on machine
  25m        25m       1      {kubelet kubeworker-1}  spec.containers{nginx-ingress-controller}  Normal   Created           Created container with docker id 65b3b9d2ce55; Security:[seccomp=unconfined]
  25m        25m       1      {kubelet kubeworker-1}  spec.containers{nginx-ingress-controller}  Normal   Started           Started container with docker id 65b3b9d2ce55
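Since the controller resolves --udp-services-configmap=$(POD_NAMESPACE)/udp-configmap-example to kube-system/udp-configmap-example, it is also worth verifying that the ConfigMap really exists under exactly that name and namespace; the kubectl create in step 4 above was not run with -n kube-system, so the object may have landed in the default namespace instead. For example:
$ kubectl get configmap --all-namespaces | grep udp-configmap-example
$ kubectl -n kube-system get configmap udp-configmap-example -o yaml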
The modified nginx-ingress-controller.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-controller
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-controller
    spec:
      # hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration
      # however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host
      # that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used
      # like with kubeadm
      hostNetwork: true
      nodeSelector:
        kubernetes.io/hostname: kubeworker-1
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.2
        name: nginx-ingress-controller
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        - containerPort: 53
          hostPort: 53
          protocol: UDP
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-configmap-example
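Assuming the manifest is saved as nginx-ingress-controller.yaml, it can be re-applied and the UDP path smoke-tested from outside the cluster against the node's host IP (a hypothetical verification, not from the original post; a working setup should resolve cluster names through the proxied kube-dns):
$ kubectl apply -f nginx-ingress-controller.yaml
$ dig @192.168.0.20 kubernetes.default.svc.cluster.local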
Answer 0 (score: 0)
The problem only exists in versions 0.9.0-beta.1 and 0.9.0-beta.2; rolling back to 0.8.3 resolved it.
According to https://github.com/kubernetes/ingress/issues/199, a fix for the 0.9.0 release is in progress.
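A minimal sketch of the rollback, assuming the Deployment from the question and that only the image tag needs to change:
$ kubectl -n kube-system set image deployment/nginx-ingress-controller \
    nginx-ingress-controller=gcr.io/google_containers/nginx-ingress-controller:0.8.3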