istio: upstream connect error or disconnect/reset before headers. reset reason: connection termination

Date: 2019-08-23 20:06:39

Tags: kubernetes istio

I am trying to follow the Istio BookInfo example for Kubernetes. However, instead of installing the resources in the default namespace, I used a namespace called qa. At step 5 I ran into a problem: when I try to curl the product page, I get the following response:

upstream connect error or disconnect/reset before headers. reset reason: connection termination

However, if I follow the same example using the default namespace, I get a successful response from the product page.

Any ideas why this breaks in my qa namespace?
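For reference, step 5 of the Bookinfo task checks connectivity from inside the mesh rather than from outside. A minimal sketch of that check, assuming the standard Bookinfo labels and the productpage service port 9080 (adjust if your manifests differ):

```shell
# Curl the productpage service from within the ratings pod in the qa namespace;
# a healthy mesh returns the page HTML, not the upstream connect error
kubectl exec -n qa "$(kubectl get pod -n qa -l app=ratings \
  -o jsonpath='{.items[0].metadata.name}')" -c ratings -- \
  curl -s productpage:9080/productpage | grep -o "<title>.*</title>"
```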

Istio version:

client version: 1.2.4
citadel version: 1.2.2
egressgateway version: 1.2.2
galley version: 1.2.2
ingressgateway version: 1.2.2
pilot version: 1.2.2
policy version: 1.2.2
sidecar-injector version: 1.2.2
telemetry version: 1.2.2

Kubernetes version (running in AKS):

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.7", GitCommit:"4683545293d792934a7a7e12f2cc47d20b2dd01b", GitTreeState:"clean", BuildDate:"2019-06-06T01:39:30Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

1 Answer:

Answer 0: (score: 0)

I would suggest the following steps to debug the reported issue:

  1. Check whether sidecar injection is enabled for the qa namespace:

$ kubectl get namespace -L istio-injection| grep qa

qa                Active   83m   enabled
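If the label is missing, or was added after the application pods were created, the pods will be running without the Envoy sidecar, and in-mesh calls to them commonly fail with exactly this connection-termination error. A sketch of the fix, assuming the workloads are managed by Deployments (as in the standard Bookinfo manifests) so deleted pods are re-created automatically:

```shell
# Enable automatic sidecar injection for the qa namespace
kubectl label namespace qa istio-injection=enabled --overwrite

# Delete the existing pods so the injection webhook runs on their replacements;
# the Deployments will re-create them with the istio-proxy container included
kubectl delete pods --all -n qa
```

Afterwards, each pod should report 2/2 containers in `kubectl get pods -n qa`.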
  2. Verify that the Bookinfo application's Kubernetes resources were deployed correctly and reside in the qa namespace:

$ kubectl get all -n qa

NAME                                  READY   STATUS    RESTARTS   AGE
pod/details-v1-74f858558f-vh97g       2/2     Running   0          29m
pod/productpage-v1-8554d58bff-5tpbl   2/2     Running   0          29m
pod/ratings-v1-7855f5bcb9-hhlds       2/2     Running   0          29m
pod/reviews-v1-59fd8b965b-w9lk5       2/2     Running   0          29m
pod/reviews-v2-d6cfdb7d6-hsjqq        2/2     Running   0          29m
pod/reviews-v3-75699b5cfb-vl7t9       2/2     Running   0          29m


NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/details       ClusterIP   IP_ADDR          <none>        9080/TCP   29m
service/productpage   ClusterIP   IP_ADDR          <none>        9080/TCP   29m
service/ratings       ClusterIP   IP_ADDR          <none>        9080/TCP   29m
service/reviews       ClusterIP   IP_ADDR          <none>        9080/TCP   29m


NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/details-v1       1/1     1            1           29m
deployment.apps/productpage-v1   1/1     1            1           29m
deployment.apps/ratings-v1       1/1     1            1           29m
deployment.apps/reviews-v1       1/1     1            1           29m
deployment.apps/reviews-v2       1/1     1            1           29m
deployment.apps/reviews-v3       1/1     1            1           29m

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/details-v1-74f858558f       1         1         1       29m
replicaset.apps/productpage-v1-8554d58bff   1         1         1       29m
replicaset.apps/ratings-v1-7855f5bcb9       1         1         1       29m
replicaset.apps/reviews-v1-59fd8b965b       1         1         1       29m
replicaset.apps/reviews-v2-d6cfdb7d6        1         1         1       29m
replicaset.apps/reviews-v3-75699b5cfb       1         1         1       29m

$ kubectl get sa -n qa

NAME                   SECRETS   AGE
bookinfo-details       1         36m
bookinfo-productpage   1         36m
bookinfo-ratings       1         36m
bookinfo-reviews       1         36m
default                1         97m
  3. Inspect the Istio Envoy sidecar in a particular Pod, so you can extract some basic data about the proxy's status and traffic routing:

kubectl logs $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}' -n qa) -c istio-proxy -n qa
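Beyond the raw proxy logs, istioctl can summarize how each sidecar is synchronized with the control plane and dump its Envoy configuration. A sketch, assuming istioctl (matching your 1.2.x install) is on the PATH:

```shell
# Show whether every sidecar has received up-to-date config from Pilot
# (STALE entries point at a sync problem between Pilot and that proxy)
istioctl proxy-status

# Dump the Envoy listener and cluster configuration for the productpage pod
PP_POD="$(kubectl get pod -n qa -l app=productpage \
  -o jsonpath='{.items[0].metadata.name}')"
istioctl proxy-config listeners "$PP_POD" -n qa
istioctl proxy-config clusters "$PP_POD" -n qa
```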

I recommend reading the Istio traffic management troubleshooting documentation for further insight.