Envoy Pod-to-Pod communication within a Service in K8s

Time: 2019-01-30 09:21:30

Tags: spring-boot kubernetes load-balancing istio envoyproxy

With Envoy configured, is it possible to send an HTTP REST request to another K8s Pod that belongs to the same Service in Kubernetes?

IMPORTANT: I also have a question here, and that question directed me to ask this under the Envoy-specific tag.

E.g. Service name = UserService, 2 Pods (replicas = 2)

Pod 1 --> Pod 2 //using pod ip not load balanced hostname 
Pod 2 --> Pod 1

The connection is made over "REST GET 1.2.3.4:7079/user/1"

The host + port values were taken from kubectl get ep.

Both Pod IPs work successfully from outside the Pods, but when I kubectl exec -it into a Pod and make the request via curl, it returns a 404 saying the endpoint was not found.
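
In short, the sequence of steps is roughly the following (service/pod names and the IP below are placeholders, not the real values):

# rough sketch of the steps described above; names and the IP are placeholders
kubectl get ep user-service                   # look up the Pod IP:port endpoints of the Service
kubectl exec -it user-service-pod-1 -- sh     # open a shell inside one of the Pods
curl -v http://1.2.3.4:7079/user/1            # call the other Pod directly by its IP -> HTTP 404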

Q I would like to know whether it is possible to make a request to another K8s Pod that is in the same Service? Answer: this is definitely possible.

Q Why can I successfully ping 1.2.3.4, yet I cannot get anything back from the REST API?

Q With Envoy configured, is it possible to request a Pod IP directly from another Pod?

Since I am a complete beginner with K8s, please let me know which configuration files or outputs are needed. Thank you.

Below are my configuration files

#values.yml
replicaCount: 1

image:
  repository: "docker.hosted/app"
  tag: "0.1.0"
  pullPolicy: Always
  pullSecret: "a_secret"

service:
  name: http
  type: NodePort
  externalPort: 7079
  internalPort: 7079

ingress:
  enabled: false

deployment.yml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "app.fullname" . }}
  labels:
    app: {{ template "app.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ template "app.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:

            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: MY_POD_PORT
              value: "{{ .Values.service.internalPort }}"
          ports:
            - containerPort: {{ .Values.service.internalPort }}
          livenessProbe:
            httpGet:
              path: /actuator/alive
              port: {{ .Values.service.internalPort }}
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /actuator/ready
              port: {{ .Values.service.internalPort }}
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- if .Values.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
    {{- end }}
      imagePullSecrets:
        - name: {{ .Values.image.pullSecret }}

service.yml

apiVersion: v1
kind: Service
metadata:
  name: {{ template "app.fullname" . }}
  labels:
    app: {{ template "app.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.externalPort }}
      targetPort: {{ .Values.service.internalPort }}
      protocol: TCP
      name: {{ .Values.service.name }}
  selector:
    app: {{ template "app.name" . }}
    release: {{ .Release.Name }}

executed from k8 master (screenshot not included)

executed from inside a pod of the same MicroService (screenshot not included)

EDIT 2: output of "kubectl get -o yaml deployment"

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2019-01-29T20:34:36Z
  generation: 1
  labels:
    app: msg-messaging-room
    chart: msg-messaging-room-0.0.22
    heritage: Tiller
    release: msg-messaging-room
  name: msg-messaging-room
  namespace: default
  resourceVersion: "25447023"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/msg-messaging-room
  uid: 4b283304-2405-11e9-abb9-000c29c7d15c
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: msg-messaging-room
      release: msg-messaging-room
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: msg-messaging-room
        release: msg-messaging-room
    spec:
      containers:
      - env:
        - name: KAFKA_HOST
          value: confluent-kafka-cp-kafka-headless
        - name: KAFKA_PORT
          value: "9092"
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: MY_POD_PORT
          value: "7079"
        image: msg-messaging-room:0.0.22
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /actuator/alive
            port: 7079
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: msg-messaging-room
        ports:
        - containerPort: 7079
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /actuator/ready
            port: 7079
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: secret
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: 2019-01-29T20:35:43Z
    lastUpdateTime: 2019-01-29T20:35:43Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2019-01-29T20:34:36Z
    lastUpdateTime: 2019-01-29T20:36:01Z
    message: ReplicaSet "msg-messaging-room-6f49b5df59" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 2
  replicas: 2
  updatedReplicas: 2

Output of "kubectl get -o yaml svc $the_service"

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2019-01-29T20:34:36Z
  labels:
    app: msg-messaging-room
    chart: msg-messaging-room-0.0.22
    heritage: Tiller
    release: msg-messaging-room
  name: msg-messaging-room
  namespace: default
  resourceVersion: "25446807"
  selfLink: /api/v1/namespaces/default/services/msg-messaging-room
  uid: 4b24bd84-2405-11e9-abb9-000c29c7d15c
spec:
  clusterIP: 1.2.3.172.201
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 31849
    port: 7079
    protocol: TCP
    targetPort: 7079
  selector:
    app: msg-messaging-room
    release: msg-messaging-room
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

2 Answers:

Answer 0 (score: 1):

What I posted in the other question was that I disabled Istio injection before installing the service and then re-enabled it after installing the service, and now everything works, so the commands that worked for me were:

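The exact commands were shown only in a screenshot that is not reproduced here; assuming the standard Istio namespace-label workflow with a Helm 2 (Tiller) install, they may have looked roughly like this (release name and chart path are illustrative):

# assumed reconstruction, not the author's literal commands
kubectl label namespace default istio-injection=disabled --overwrite   # turn off automatic sidecar injection
helm install --name msg-messaging-room ./msg-messaging-room            # install the chart (illustrative path)
kubectl label namespace default istio-injection=enabled --overwrite    # turn injection back on afterwards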

Answer 1 (score: 0):

For the Pod to Pod part:

Adding another (headless) Service will allow you to reach the other Pod via curl while still having Istio enabled.

For example, you would add a second Service defined as headless: it exposes the Pods themselves as endpoints instead of getting its own clusterIP.
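
A minimal sketch of what such a headless Service could look like, assuming the same selector labels and port as the Service above (the name is illustrative and not taken from the original answer), applied with kubectl:

# sketch only: a second, headless Service for direct pod-to-pod calls
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: msg-messaging-room-headless   # illustrative name
spec:
  clusterIP: None                     # headless: no virtual IP; DNS resolves to the Pod IPs
  ports:
    - name: http
      port: 7079
      targetPort: 7079
  selector:
    app: msg-messaging-room
    release: msg-messaging-room
EOF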

If you don't need load balancing you can just use the headless Service, but if you want both, you can use the first Service for external traffic and the headless Service for Pod-to-Pod communication.