Istio Ingress results in "no healthy upstream"

Time: 2017-12-05 23:31:06

Tags: kubernetes istio

I am using a Deployment for an externally facing service, which is exposed behind a NodePort and then an Istio ingress. The Deployment uses manual sidecar injection. Once the Deployment, NodePort, and ingress are running, I can make requests to the Istio ingress.

For some unknown reason, requests are not routed to my Deployment; instead, the response shows the text "no healthy upstream". Why does this happen, and what causes it?

I can see in the HTTP response that the status code is 503 (Service Unavailable) and the server is "envoy". The Deployment itself is running, because I can map a port to it and everything works as expected.

4 Answers:

Answer 0: (score: 1)

Although this is a fairly general error caused by a routing problem from an incorrect Istio setup, I will provide a general solution/some advice for anyone who runs into the same issue.

In my case, the problem was due to incorrect route rule configuration: the native Kubernetes service was working, but the Istio routing rules were configured incorrectly, so Istio could not route from the ingress to the service.
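
For reference, here is a minimal sketch of the kind of VirtualService that has to line up for the ingress route to work (all names, hosts, and ports here are hypothetical): the gateways entry must reference an existing Gateway, and the destination host and port must match the Kubernetes Service.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service              # hypothetical name
  namespace: default
spec:
  hosts:
  - "my-service.example.com"    # external host the gateway accepts
  gateways:
  - my-gateway                  # must reference an existing Gateway resource
  http:
  - route:
    - destination:
        host: my-service        # must match the Kubernetes Service name
        port:
          number: 80            # must match a port exposed by the Service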

Answer 1: (score: 1)

Just in case you, like me, are curious... even though in my case the cause of the error was quite clear...

Cause of the error: I had two versions of the same service (v1 and v2), and an Istio VirtualService with a weighted traffic routing destination configured: 95% going to v1 and 5% going to v2. Since I had not yet deployed v1, the error "503 - no healthy upstream" showed up for 95% of the requests.
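
A rough sketch of that kind of weighted configuration is shown below; the service name, namespace, subsets, and weights are taken from this answer, while the pod labels are assumptions for illustration.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: teachstore-course
  namespace: development
spec:
  hosts:
  - teachstore-course
  http:
  - route:
    - destination:
        host: teachstore-course
        subset: v1
      weight: 95
    - destination:
        host: teachstore-course
        subset: v2
      weight: 5
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: teachstore-course
  namespace: development
spec:
  host: teachstore-course
  subsets:
  - name: v1
    labels:
      version: v1    # assumed label; no v1 pods existed yet, hence "no healthy upstream" for 95% of requests
  - name: v2
    labels:
      version: v2    # assumed label; v2 pods existed and were healthy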

OK, even so, I knew what the problem was and how to fix it (just deploy v1), but I wondered... how can I get more information about this error? How can I dig deeper into it to find out what is going on?

Here is one way to investigate it, using istioctl, Istio's configuration command-line utility:

# 1) Check the status of the proxies -->
  $ istioctl proxy-status
# Result -->
  NAME                                                   CDS        LDS        EDS        RDS          PILOT                       VERSION
  ...
  teachstore-course-v1-74f965bd84-8lmnf.development      SYNCED     SYNCED     SYNCED     SYNCED       istiod-86798869b8-bqw7c     1.5.0
  ...
  ...

# 2) Get the outbound cluster names from the JSON output of the proxy (the service with the problem) -->
  $ istioctl proxy-config cluster teachstore-course-v1-74f965bd84-8lmnf.development --fqdn teachstore-course.development.svc.cluster.local -o json
# 2) Or, if you have jq installed locally (extracts only the names we need) -->
  $ istioctl proxy-config cluster teachstore-course-v1-74f965bd84-8lmnf.development --fqdn teachstore-course.development.svc.cluster.local -o json | jq -r .[].name
# Result -->
  outbound|80||teachstore-course.development.svc.cluster.local
  inbound|80|9180-tcp|teachstore-course.development.svc.cluster.local
  outbound|80|v1|teachstore-course.development.svc.cluster.local
  outbound|80|v2|teachstore-course.development.svc.cluster.local

# 3) Check the endpoints of "outbound|80|v2|teachstore-course..." using v1 proxy -->
  $ istioctl proxy-config endpoints teachstore-course-v1-74f965bd84-8lmnf.development --cluster "outbound|80|v2|teachstore-course.development.svc.cluster.local"
# Result (v2, which receives 5% of the traffic, is fine: there are healthy endpoints) -->
  ENDPOINT             STATUS      OUTLIER CHECK     CLUSTER
  172.17.0.28:9180     HEALTHY     OK                outbound|80|v2|teachstore-course.development.svc.cluster.local
  172.17.0.29:9180     HEALTHY     OK                outbound|80|v2|teachstore-course.development.svc.cluster.local

# 4) However, for the v1 version "outbound|80|v1|teachstore-course..." -->
$ istioctl proxy-config endpoints teachstore-course-v1-74f965bd84-8lmnf.development --cluster "outbound|80|v1|teachstore-course.development.svc.cluster.local"
  ENDPOINT             STATUS      OUTLIER CHECK     CLUSTER
# Nothing! Empty, no pods, which explains the "no healthy upstream" for 95% of the traffic.

Answer 2: (score: 0)

I ran into this problem when my pods were stuck in the ContainerCreating state, which caused the 503 error. As @pegaldon mentioned, it can also happen due to an incorrect route configuration, or because the user did not create a Gateway.
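
If a missing Gateway is the cause, a minimal sketch would look something like the following (names and hosts are hypothetical; the selector assumes the default istio: ingressgateway label used by Istio's ingress gateway pods).

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway               # hypothetical; must be referenced by the VirtualService's gateways list
  namespace: default
spec:
  selector:
    istio: ingressgateway        # default label on the Istio ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "my-service.example.com"   # hypothetical host; must cover the VirtualService hosts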

Answer 3: (score: 0)

Delete the destinationrules.networking.istio.io resource and recreate the virtualservice.networking.istio.io resource:

[root@10-20-10-110 ~]# curl http://dprovider.example.com:31400/dw/provider/beat
no healthy upstream[root@10-20-10-110 ~]# 
[root@10-20-10-110 ~]# curl http://10.210.11.221:10100/dw/provider/beat
"该服务节点  10.210.11.221  心跳正常!"[root@10-20-10-110 ~]# 
[root@10-20-10-110 ~]# 
[root@10-20-10-110 ~]# cat /home/example_service_yaml/vs/dw-provider-service.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: dw-provider-service
  namespace: example
spec:
  hosts:
  - "dprovider.example.com"
  gateways:
  - example-gateway
  http:
  - route:
    - destination:
        host: dw-provider-service 
        port:
          number: 10100
        subset: "v1-0-0"
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: dw-provider-service
  namespace: example
spec:
  host: dw-provider-service
  subsets:
  - name: "v1-0-0"
    labels:
      version: 1.0.0

[root@10-20-10-110 ~]# vi /home/example_service_yaml/vs/dw-provider-service.yaml 
[root@10-20-10-110 ~]# kubectl -n example get vs -o wide | grep dw                       
dw-collection-service    [example-gateway]   [dw.collection.example.com]                       72d
dw-platform-service      [example-gateway]   [dplatform.example.com]                           81d
dw-provider-service      [example-gateway]   [dprovider.example.com]                           21m
dw-sync-service          [example-gateway]   [dw-sync-service dsync.example.com]               34d
[root@10-20-10-110 ~]# kubectl -n example delete vs dw-provider-service 
virtualservice.networking.istio.io "dw-provider-service" deleted
[root@10-20-10-110 ~]# kubectl -n example delete d dw-provider-service   
daemonsets.apps                       deniers.config.istio.io               deployments.extensions                dogstatsds.config.istio.io            
daemonsets.extensions                 deployments.apps                      destinationrules.networking.istio.io  
[root@10-20-10-110 ~]# kubectl -n example delete destinationrules.networking.istio.io dw-provider-service 
destinationrule.networking.istio.io "dw-provider-service" deleted
[root@10-20-10-110 ~]# kubectl apply -f /home/example_service_yaml/vs/dw-provider-service.yaml 
virtualservice.networking.istio.io/dw-provider-service created
[root@10-20-10-110 ~]# curl http://dprovider.example.com:31400/dw/provider/beat
"该服务节点  10.210.11.221  心跳正常!"[root@10-20-10-110 ~]# 
[root@10-20-10-110 ~]#