I have a problem with pod-to-pod communication when the pods are deployed with Istio. I actually need it to make Hazelcast discovery work with Istio, but here I'll try to generalize the problem.
Let's deploy a sample hello world service on Kubernetes. The service replies to HTTP requests on port 8000.
$ kubectl create deployment hello-deployment --image=crccheck/hello-world
The created Pod has an internal IP assigned:
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
hello-deployment-84d876dfd-s6r5w 1/1 Running 0 8m 10.20.3.32 gke-rafal-test-istio-1-0-default-pool-91f437a3-cf5d <none>
In the Job curl.yaml, we can use the Pod IP directly.
apiVersion: batch/v1
kind: Job
metadata:
  name: curl
spec:
  template:
    spec:
      containers:
      - name: curl
        image: byrnedo/alpine-curl
        command: ["curl", "10.20.3.32:8000"]
      restartPolicy: Never
  backoffLimit: 4
Running the job without Istio works fine.
$ kubectl apply -f curl.yaml
$ kubectl logs pod/curl-pptlm
...
Hello World
...
However, when I try the same with Istio, it does not work. The HTTP request is blocked by Envoy.
$ kubectl apply -f <(istioctl kube-inject -f curl.yaml)
$ kubectl logs pod/curl-s2bj6 curl
...
curl: (7) Failed to connect to 10.20.3.32 port 8000: Connection refused
I've played with Service Entries, MESH_INTERNAL, and MESH_EXTERNAL, but without success. How can I bypass Envoy and call the Pod directly?
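For reference, this is roughly the kind of ServiceEntry I experimented with (a sketch; hello.internal is a placeholder host name, and the address is just the Pod IP from above) — it did not change anything:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: hello-pod
spec:
  hosts:
  - hello.internal      # placeholder host name
  addresses:
  - 10.20.3.32/32       # the Pod IP from above
  ports:
  - number: 8000
    name: http
    protocol: HTTP
  location: MESH_INTERNAL
  resolution: NONE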
Edit: the output of istioctl kube-inject -f curl.yaml:
$ istioctl kube-inject -f curl.yaml
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  name: curl
spec:
  backoffLimit: 4
  template:
    metadata:
      annotations:
        sidecar.istio.io/status: '{"version":"dbf2d95ff300e5043b4032ed912ac004974947cdd058b08bade744c15916ba6a","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
      creationTimestamp: null
    spec:
      containers:
      - command:
        - curl
        - 10.16.2.34:8000/
        image: byrnedo/alpine-curl
        name: curl
        resources: {}
      - args:
        - proxy
        - sidecar
        - --domain
        - $(POD_NAMESPACE).svc.cluster.local
        - --configPath
        - /etc/istio/proxy
        - --binaryPath
        - /usr/local/bin/envoy
        - --serviceCluster
        - curl.default
        - --drainDuration
        - 45s
        - --parentShutdownDuration
        - 1m0s
        - --discoveryAddress
        - istio-pilot.istio-system:15010
        - --zipkinAddress
        - zipkin.istio-system:9411
        - --connectTimeout
        - 10s
        - --proxyAdminPort
        - "15000"
        - --concurrency
        - "2"
        - --controlPlaneAuthPolicy
        - NONE
        - --statusPort
        - "15020"
        - --applicationPorts
        - ""
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: ISTIO_META_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: ISTIO_META_CONFIG_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: ISTIO_META_INTERCEPTION_MODE
          value: REDIRECT
        image: docker.io/istio/proxyv2:1.1.1
        imagePullPolicy: IfNotPresent
        name: istio-proxy
        ports:
        - containerPort: 15090
          name: http-envoy-prom
          protocol: TCP
        readinessProbe:
          failureThreshold: 30
          httpGet:
            path: /healthz/ready
            port: 15020
          initialDelaySeconds: 1
          periodSeconds: 2
        resources:
          limits:
            cpu: "2"
            memory: 128Mi
          requests:
            cpu: 100m
            memory: 128Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsUser: 1337
        volumeMounts:
        - mountPath: /etc/istio/proxy
          name: istio-envoy
        - mountPath: /etc/certs/
          name: istio-certs
          readOnly: true
      initContainers:
      - args:
        - -p
        - "15001"
        - -u
        - "1337"
        - -m
        - REDIRECT
        - -i
        - '*'
        - -x
        - ""
        - -b
        - ""
        - -d
        - "15020"
        image: docker.io/istio/proxy_init:1.1.1
        imagePullPolicy: IfNotPresent
        name: istio-init
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 10m
            memory: 10Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
      restartPolicy: Never
      volumes:
      - emptyDir:
          medium: Memory
        name: istio-envoy
      - name: istio-certs
        secret:
          optional: true
          secretName: istio.default
status: {}
---
Answer 0 (score: 2)
When a pod with an Istio sidecar is started, the following things happen:

- An init container changes the iptables rules so that all outgoing TCP traffic is routed to the sidecar container (istio-proxy) on port 15001.
- The containers of the pod start in parallel (curl and istio-proxy).

If your curl container executes before istio-proxy is listening on port 15001, you get the error.
I started the container with a sleep command, exec'd into it, and then curl worked.
$ kubectl apply -f <(istioctl kube-inject -f curl-pod.yaml)
$ k exec -it -n noistio curl -c curl bash
[root@curl /]# curl 172.16.249.198:8000
<xmp>
Hello World
## .
## ## ## ==
## ## ## ## ## ===
/""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ / ===- ~~~
\______ o _,/
\ \ _,'
`'--.._\..--''
</xmp>
[root@curl /]#
curl-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: curl
spec:
  containers:
  - name: curl
    image: centos
    command: ["sleep", "3600"]
Answer 1 (score: 0)
Make sure you have an ingress Gateway configured, and then you need to configure a VirtualService. See the link below for a simple example.
https://istio.io/docs/tasks/traffic-management/ingress/#configuring-ingress-using-an-istio-gateway
After deploying the Gateway together with the VirtualService, you should be able to curl your service from outside the cluster via the external IP.
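For completeness, a minimal sketch of the two resources along the lines of the linked example (hello-service is a placeholder for a Service fronting the pods, not a name from the question):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hello-gateway
spec:
  selector:
    istio: ingressgateway   # Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-vs
spec:
  hosts:
  - "*"
  gateways:
  - hello-gateway
  http:
  - route:
    - destination:
        host: hello-service   # placeholder service name
        port:
          number: 8000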
However, if you want to inspect traffic coming from INSIDE the cluster, you need to use Istio's mirroring API to mirror the service (pod) from one pod to another, and then apply the command (kubectl apply -f curl.yaml) to see the traffic.
See the link below for a mirroring example:
https://istio.io/docs/tasks/traffic-management/mirroring/
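A mirroring rule from that task looks roughly like this (a sketch; both service names are placeholders):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-mirror
spec:
  hosts:
  - hello-service             # placeholder: the service receiving live traffic
  http:
  - route:
    - destination:
        host: hello-service
    # Copy the same requests to a second service for inspection.
    mirror:
      host: hello-service-v2  # placeholder: the mirror target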
Hope this helps.