GKE: Ingress not forwarding to NodePort service: /healthz was not found on this server

Date: 2019-11-18 23:23:14

Tags: kubernetes google-kubernetes-engine kubernetes-ingress

I have too many LoadBalancer services eating up external IPs, so I want to switch to an Ingress controller.

I went through the tutorial, and everything worked fine with the pods Google provides.

With my own Pod, I can reach it through the NodePort service...

> curl http://35.223.89.81:32607/healthz
OK

...but calls through the Ingress always fail...

> curl http://35.241.21.71:80/healthz
<!DOCTYPE html>
<html lang=en>
  <meta charset=utf-8>
  <meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">
  <title>Error 404 (Not Found)!!1</title>
  <style>
    *{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}
  </style>
  <a href=//www.google.com/><span id=logo aria-label=Google></span></a>
  <p><b>404.</b> <ins>That’s an error.</ins>
  <p>The requested URL <code>/healthz</code> was not found on this server.  <ins>That’s all we know.</ins>

Here is the k8s version I am running:

> gcloud container clusters list
NAME              LOCATION       MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION     NUM_NODES  STATUS
monza-predictors  us-central1-a  1.13.11-gke.14  35.193.247.210  n1-standard-1  1.13.11-gke.9 *  2          RUNNING

The YAML for the Ingress:

> cat fanout-ingress-v2.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout-ingress
spec:
  rules:
  - http:
      paths:
      - path: /healthz
        backend:
          serviceName: predictor-classification-seatbelt-driver-service-node-port
          servicePort: 4444
      - path: /seatbelt-driver
        backend:
          serviceName: predictor-classification-seatbelt-driver-service-node-port
          servicePort: 4444

Describing the Ingress:

> kubectl describe ing fanout-ingress
Name:             fanout-ingress
Namespace:        default
Address:          35.241.21.71
Default backend:  default-http-backend:80 (10.40.2.10:8080)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     
        /healthz           predictor-classification-seatbelt-driver-service-node-port:4444 (<none>)
        /seatbelt-driver   predictor-classification-seatbelt-driver-service-node-port:4444 (<none>)
Annotations:
  ingress.kubernetes.io/url-map:                     k8s-um-default-fanout-ingress--62f4c45447b62142
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"fanout-ingress","namespace":"default"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"predictor-classification-seatbelt-driver-service-node-port","servicePort":4444},"path":"/healthz"},{"backend":{"serviceName":"predictor-classification-seatbelt-driver-service-node-port","servicePort":4444},"path":"/seatbelt-driver"}]}}]}}

  ingress.kubernetes.io/backends:         {"k8s-be-31413--62f4c45447b62142":"HEALTHY","k8s-be-32607--62f4c45447b62142":"UNHEALTHY"}
  ingress.kubernetes.io/forwarding-rule:  k8s-fw-default-fanout-ingress--62f4c45447b62142
  ingress.kubernetes.io/target-proxy:     k8s-tp-default-fanout-ingress--62f4c45447b62142
Events:
  Type    Reason  Age   From                     Message
  ----    ------  ----  ----                     -------
  Normal  ADD     21m   loadbalancer-controller  default/fanout-ingress
  Normal  CREATE  19m   loadbalancer-controller  ip: 35.241.21.71

I notice that one of the two backends is UNHEALTHY.
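For reference, that backend's health can also be inspected from the GCP side. This is only a sketch: the backend-service name is copied from the ingress.kubernetes.io/backends annotation above, and the matching health check still has to be looked up via the list command.

# List the backend services created by the ingress controller
> gcloud compute backend-services list
# Ask GCP which instances it considers healthy for the suspect backend
> gcloud compute backend-services get-health k8s-be-32607--62f4c45447b62142 --global
# Find the health check attached to that backend to see which path/port it probes
> gcloud compute health-checks list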

The YAML for the NodePort service:

> cat service-node-port-classification-predictor.yaml
apiVersion: v1
kind: Service
metadata:
  name: predictor-classification-seatbelt-driver-service-node-port
  namespace: default
spec:
  ports:
  - port: 4444
    protocol: TCP
    targetPort: 4444
  selector:
    app: predictor-classification-seatbelt-driver
  type: NodePort

Describing the NodePort service:

> kubectl describe svc predictor-classification-seatbelt-driver-service-node-port
Name:                     predictor-classification-seatbelt-driver-service-node-port
Namespace:                default
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"predictor-classification-seatbelt-driver-service-node-port","name...
Selector:                 app=predictor-classification-seatbelt-driver
Type:                     NodePort
IP:                       10.43.243.69
Port:                     <unset>  4444/TCP
TargetPort:               4444/TCP
NodePort:                 <unset>  32607/TCP
Endpoints:                10.40.2.16:4444
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

The YAML for the Deployment:

> cat deployment-classification-predictor-v2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: predictor-classification-seatbelt-driver
  labels:
    app: predictor-classification-seatbelt-driver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: predictor-classification-seatbelt-driver
  template:
    metadata:
      labels:
        app: predictor-classification-seatbelt-driver
    spec:
      containers:
      - name: predictor-classification-seatbelt-driver
        image: gcr.io/annotator-1286/classification-predictor
        command: ["/app/server.sh"]
        args: ["4444", "https://storage.googleapis.com/com-aosvapps-runs/38/1564677191/models/mobile.pb", "https://storage.googleapis.com/com-aosvapps-runs/38/1564677191/models/labels.csv"]
        ports:
        - containerPort: 4444
        livenessProbe:
          httpGet:
            path: /healthz
            port: 4444
          initialDelaySeconds: 120

Describing the Deployment:

> kubectl describe deploy predictor-classification-seatbelt-driver
Name:                   predictor-classification-seatbelt-driver
Namespace:              default
CreationTimestamp:      Mon, 18 Nov 2019 12:17:13 -0800
Labels:                 app=predictor-classification-seatbelt-driver
Annotations:            deployment.kubernetes.io/revision: 1
                        kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"predictor-classification-seatbelt-driver"},"name...
Selector:               app=predictor-classification-seatbelt-driver
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=predictor-classification-seatbelt-driver
  Containers:
   predictor-classification-seatbelt-driver:
    Image:      gcr.io/annotator-1286/classification-predictor
    Port:       4444/TCP
    Host Port:  0/TCP
    Command:
      /app/server.sh
    Args:
      4444
      https://storage.googleapis.com/com-aosvapps-runs/38/1564677191/models/mobile.pb
      https://storage.googleapis.com/com-aosvapps-runs/38/1564677191/models/labels.csv
    Liveness:     http-get http://:4444/healthz delay=120s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   predictor-classification-seatbelt-driver-85bc679444 (1/1 replicas created)
Events:          <none>

Describing the Pod:

> kubectl describe po predictor-classification-seatbelt-driver-85bc679444-lcb7v
Name:               predictor-classification-seatbelt-driver-85bc679444-lcb7v
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               gke-monza-predictors-default-pool-268f57e3-1bs6/10.128.0.65
Start Time:         Mon, 18 Nov 2019 12:17:13 -0800
Labels:             app=predictor-classification-seatbelt-driver
                    pod-template-hash=85bc679444
Annotations:        kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container predictor-classification-seatbelt-driver
Status:             Running
IP:                 10.40.2.16
Controlled By:      ReplicaSet/predictor-classification-seatbelt-driver-85bc679444
Containers:
  predictor-classification-seatbelt-driver:
    Container ID:  docker://90ce1466b852760db92bc66698295a2ae2963f19d26111e5be03d588dc83a712
    Image:         gcr.io/annotator-1286/classification-predictor
    Image ID:      docker-pullable://gcr.io/annotator-1286/classification-predictor@sha256:63690593d710182110e51fbd620d6944241c36dd79bce7b08b2823677ec7b929
    Port:          4444/TCP
    Host Port:     0/TCP
    Command:
      /app/server.sh
    Args:
      4444
      https://storage.googleapis.com/com-aosvapps-runs/38/1564677191/models/mobile.pb
      https://storage.googleapis.com/com-aosvapps-runs/38/1564677191/models/labels.csv
    State:          Running
      Started:      Mon, 18 Nov 2019 12:17:15 -0800
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Liveness:     http-get http://:4444/healthz delay=120s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8q95m (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-8q95m:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-8q95m
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

UPDATE: using a Single Service Ingress does not fix the problem either.

> cat fanout-ingress-v3.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout-ingress
spec:
  backend:
    serviceName: predictor-classification-seatbelt-driver-service-node-port
    servicePort: 4444
> kubectl apply -f fanout-ingress-v3.yaml
ingress.extensions/fanout-ingress created


> kubectl describe ing fanout-ingress
Name:             fanout-ingress
Namespace:        default
Address:          35.244.250.224
Default backend:  predictor-classification-seatbelt-driver-service-node-port:4444 (10.40.2.16:4444)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     predictor-classification-seatbelt-driver-service-node-port:4444 (10.40.2.16:4444)
Annotations:
  ingress.kubernetes.io/url-map:                     k8s-um-default-fanout-ingress--62f4c45447b62142
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"fanout-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"predictor-classification-seatbelt-driver-service-node-port","servicePort":4444}}}

  ingress.kubernetes.io/backends:         {"k8s-be-32607--62f4c45447b62142":"Unknown"}
  ingress.kubernetes.io/forwarding-rule:  k8s-fw-default-fanout-ingress--62f4c45447b62142
  ingress.kubernetes.io/target-proxy:     k8s-tp-default-fanout-ingress--62f4c45447b62142
Events:
  Type    Reason  Age    From                     Message
  ----    ------  ----   ----                     -------
  Normal  ADD     3m31s  loadbalancer-controller  default/fanout-ingress
  Normal  CREATE  2m56s  loadbalancer-controller  ip: 35.244.250.224



> curl 35.244.250.224/healthz
<!DOCTYPE html>
<html lang=en>
  <meta charset=utf-8>
  <meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">
  <title>Error 404 (Not Found)!!1</title>
  <style>
    *{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}
  </style>
  <a href=//www.google.com/><span id=logo aria-label=Google></span></a>
  <p><b>404.</b> <ins>That’s an error.</ins>
  <p>The requested URL <code>/healthz</code> was not found on this server.  <ins>That’s all we know.</ins>

1 Answer:

Answer 0 (score: 1)

Add a readinessProbe to your Deployment object:

    readinessProbe:
      httpGet:
        path: /healthz
        port: 4444
      initialDelaySeconds: 120

The Ingress controller most likely waits to route traffic to the Service until the Pods behind it report that they are ready to handle requests from the Ingress proxy.
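As a sketch only (reusing the container name, image, port, and liveness settings from the Deployment above; only the readinessProbe block is new), the container spec would then look roughly like this:

      containers:
      - name: predictor-classification-seatbelt-driver
        image: gcr.io/annotator-1286/classification-predictor
        command: ["/app/server.sh"]
        args: ["4444", "https://storage.googleapis.com/com-aosvapps-runs/38/1564677191/models/mobile.pb", "https://storage.googleapis.com/com-aosvapps-runs/38/1564677191/models/labels.csv"]
        ports:
        - containerPort: 4444
        livenessProbe:
          httpGet:
            path: /healthz
            port: 4444
          initialDelaySeconds: 120
        readinessProbe:        # new: signals when the pod is ready to accept traffic
          httpGet:
            path: /healthz
            port: 4444
          initialDelaySeconds: 120

After re-applying the Deployment, it may take a few minutes before the backend listed in the ingress.kubernetes.io/backends annotation flips from UNHEALTHY to HEALTHY.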