Istio Ingress Controller virtual service returns 503

Date: 2019-05-07 15:30:00

Tags: azure kubernetes kubernetes-ingress istio

I created an AKS cluster with the following versions:

Kubernetes version: 1.12.6
Istio version: 1.1.4
Cloud Provider: Azure

I have also successfully installed Istio with an ingress gateway that has an external IP address, and I enabled istio-injection on the namespace where my service is deployed. I can see the sidecar injection succeeding:

NAME                                      READY   STATUS    RESTARTS   AGE
club-finder-deployment-7dcf4479f7-8jlpc   2/2     Running   0          11h
club-finder-deployment-7dcf4479f7-jzfv7   2/2     Running   0          11h

My TLS gateway:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tls-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "*"

Note: I am using a self-signed certificate for testing.
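For completeness, a sketch of how such a test certificate can be produced. The CN is a placeholder; in Istio 1.1 the ingress gateway mounts a secret named `istio-ingressgateway-certs` at `/etc/istio/ingressgateway-certs`, which is the path the Gateway above references:

```shell
# Generate a throwaway self-signed certificate (substitute your own host for the CN).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=example.com"

# The gateway picks the files up from the istio-ingressgateway-certs secret,
# mounted at /etc/istio/ingressgateway-certs. Run this against your cluster:
# kubectl create -n istio-system secret tls istio-ingressgateway-certs \
#   --key tls.key --cert tls.crt
```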

I have applied the following VirtualService:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: club-finder-service-rules
  namespace: istio-system
spec:
  # https://istio.io/docs/reference/config/istio.networking.v1alpha3/#VirtualService
  gateways: # The default `mesh` value used when this field is left blank doesn't seem to propagate the rules properly. For now, always use an explicit list of gateways.
    - tls-gateway
  hosts:
    - "*" # APIM Manager URL
  http:
  - match:
    - uri:
        prefix: /dev/clubfinder/service/clubs
    rewrite:
      uri: /v1/clubfinder/clubs/
    route:
    - destination:
        host: club-finder.club-finder-service-dev.svc.cluster.local
        port:
          number: 8080
  - match:
    - uri:
        prefix: /dev/clubfinder/service/status
    rewrite:
      uri: /status
    route:
    - destination:
        host: club-finder.club-finder-service-dev.svc.cluster.local
        port:
          number: 8080

Now, when I try to test my service using the external IP of the ingress gateway:

curl -kv https://<external-ip-of-ingress>/dev/clubfinder/service/status

I get this error:

* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7fe5e800d600)
> GET /dev/clubfinder/service/status HTTP/2
> Host: x.x.x.x  <-- IP redacted intentionally
> User-Agent: curl/7.54.0
> Accept: */*
> 
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 503 
< date: Tue, 07 May 2019 05:15:01 GMT
< server: istio-envoy
< 
* Connection #0 to host x.x.x.x left intact

Can someone point out what is wrong here?

2 answers:

Answer 0 (score: 1)

I had defined my `VirtualService` yaml incorrectly. Instead of the default HTTP port 80, I had specified 8080, which is the port my application listens on. The destination port must be the Kubernetes Service port, not the container port. The following yaml worked for me:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: club-finder-service-rules
  namespace: istio-system
spec:
  # https://istio.io/docs/reference/config/istio.networking.v1alpha3/#VirtualService
  gateways: # The default `mesh` value used when this field is left blank doesn't seem to propagate the rules properly. For now, always use an explicit list of gateways.
    - tls-gateway
  hosts:
    - "*" # APIM Manager URL
  http:
  - match:
    - uri:
        prefix: /dev/clubfinder/service/clubs
    rewrite:
      uri: /v1/clubfinder/clubs/
    route:
    - destination:
        host: club-finder.club-finder-service-dev.svc.cluster.local
        port:
          number: 80
  - match:
    - uri:
        prefix: /dev/clubfinder/service/status
    rewrite:
      uri: /status
    route:
    - destination:
        host: club-finder.club-finder-service-dev.svc.cluster.local
        port:
          number: 80
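The port 80 vs. 8080 distinction comes from the Kubernetes Service sitting in front of the pods: the VirtualService destination must target the Service port, which then forwards to the container port. A sketch of the Service this implies (the name and selector labels are assumptions inferred from the destination host, not taken from the question):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: club-finder                  # assumed: matches the destination host above
  namespace: club-finder-service-dev
spec:
  selector:
    app: club-finder                 # assumed pod label
  ports:
  - name: http
    port: 80          # Service port -- what the VirtualService destination must use
    targetPort: 8080  # container port the application actually listens on
```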

Answer 1 (score: 1)

For future reference, when you run into a problem like this there are basically two main troubleshooting steps:

1) Check that the Envoy proxies are up and that their configuration is in sync with Pilot:

istioctl proxy-status

2) Dump Envoy's listeners for your pod and check whether anything is listening on the port your service is routed to:

istioctl proxy-config listeners club-finder-deployment-7dcf4479f7-8jlpc

So in this case, step 2 would have shown no listener for port 80, pointing to the root cause.

Additionally, if you look at the istio-proxy logs, you will likely see errors tagged with the UF (upstream connection failure) or UH (no healthy upstream) response flags. Here is the complete list of error flags.

For deeper Envoy debugging, refer to this handbook.
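When scanning the istio-proxy container logs for these flags, it helps to know that in Envoy's default access-log layout the response flags sit right after the HTTP status code. A quick extraction sketch, using a fabricated sample line (illustrative only, not real cluster output):

```shell
# Illustrative only: a fabricated line in Envoy's default access-log layout.
# The field right after the HTTP status code is the response-flags column.
line='[2019-05-07T05:15:01.000Z] "GET /status HTTP/2" 503 UH "-" 0 19 0'

# With this layout the status code is field 5 and the flags are field 6.
flags=$(echo "$line" | awk '{print $6}')
echo "$flags"
```

In a live cluster the same idea applies to `kubectl logs <pod> -c istio-proxy` output.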