I have an AWS EKS cluster in a VPC with CIDR 172.20.0.0/16, with Istio 1.0.2 installed via Helm:
helm upgrade -i istio install/kubernetes/helm/istio \
--namespace istio-system \
--set tracing.enabled=true \
--set grafana.enabled=true \
--set telemetry-gateway.grafanaEnabled=true \
--set telemetry-gateway.prometheusEnabled=true \
--set global.proxy.includeIPRanges="172.20.0.0/16" \
--set servicegraph.enabled=true \
--set galley.enabled=false
Then I deploy some pods for testing:
apiVersion: v1
kind: Service
metadata:
  name: service-one
  labels:
    app: service-one
spec:
  ports:
  - port: 80
    targetPort: 8080
    name: http
  selector:
    app: service-one
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: service-one
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: service-one
    spec:
      containers:
      - name: app
        image: gcr.io/google_containers/echoserver:1.4
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-two
  labels:
    app: service-two
spec:
  ports:
  - port: 80
    targetPort: 8080
    name: http-status
  selector:
    app: service-two
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: service-two
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: service-two
    spec:
      containers:
      - name: app
        image: gcr.io/google_containers/echoserver:1.4
        ports:
        - containerPort: 8080
and apply it with:
kubectl apply -f <(istioctl kube-inject -f app.yaml)
Then, from the service-one container, I make a request to service-two, and there are no logs about the outgoing request in service-one's istio-proxy container. However, if I reinstall Istio without setting global.proxy.includeIPRanges, everything works as expected (but I need this setting to allow a number of external connections). How can I debug what is going on?
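For reference, the check I run looks roughly like this (the pod lookup and the curl call are illustrative; the image may ship wget instead of curl, and I assume the sidecar writes its access log to stdout):

# grab the name of the service-one pod
ONE=$(kubectl get pod -l app=service-one -o jsonpath='{.items[0].metadata.name}')

# call service-two from the application container of service-one
kubectl exec "$ONE" -c app -- curl -s http://service-two

# then look for the outbound request in the sidecar's access log
kubectl logs "$ONE" -c istio-proxy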
Answer 0 (score: 0)
Using the global.proxy.includeIPRanges setting is not recommended, and it does not work. There is a discussion about it on GitHub. The newer mechanism is includeOutboundIPRanges in the sidecar-injector ConfigMap, or the traffic.sidecar.istio.io/includeOutboundIPRanges annotation on the pod. The annotation looks easier. The official documentation is currently not clear about this.
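If you prefer the ConfigMap route, the injection template that carries this default lives in the istio-sidecar-injector ConfigMap created by the Helm install (name assumed from a default installation); you can inspect or change it with something like:

# view the injection template, which contains the default outbound IP ranges
kubectl -n istio-system get configmap istio-sidecar-injector -o yaml

# or edit it in place; already-injected pods need to be re-injected afterwards
kubectl -n istio-system edit configmap istio-sidecar-injector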
You can add the annotation to the pod template of your deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: service-one
spec:
  replicas: 1
  template:
    metadata:
      annotations:
        traffic.sidecar.istio.io/includeOutboundIPRanges: "172.20.0.0/16"
      labels:
        app: service-one
    spec:
      containers:
      - name: app
        image: gcr.io/google_containers/echoserver:1.4
        ports:
        - containerPort: 8080
Do the same for the second deployment.
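To verify that the annotation is actually being honored, you can check the injected pod (assuming istio-init is the first init container, as it is with a default injection):

# the annotation should appear on the injected pod
kubectl get pod -l app=service-one -o yaml | grep includeOutboundIPRanges

# the CIDR should also show up among the istio-init container's arguments,
# since that container writes the iptables redirect rules
kubectl get pod -l app=service-one -o jsonpath='{.items[0].spec.initContainers[0].args}'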