Does a ClusterIP service distribute requests across replica pods?

Time: 2018-02-14 14:10:16

Tags: kubernetes

Do you know whether a ClusterIP service distributes the workload between the target deployment's replicas?

I have 5 replicas of a backend, with a ClusterIP service selecting them. I also have 5 replicas of an nginx pod pointing to this backend deployment. But when I run a heavy request, the backend stops responding to other requests until it finishes the heavy one.

Update

This is my configuration:


Note: I have replaced some company-related information.

Content provider deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name:  frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: webapp
        tier: frontend
    spec:
      containers:
      - name:  python-gunicorn
        image:  <my-user>/webapp:1.1.2
        command: ["/env/bin/gunicorn", "--bind", "0.0.0.0:8000", "main:app", "--chdir", "/deploy/app", "--error-logfile", "/var/log/gunicorn/error.log", "--timeout", "7200"]
        resources:
          requests:
            # memory: "64Mi"
            cpu: "0.25"
          limits:
            # memory: "128Mi"
            cpu: "0.4"
        ports:
        - containerPort: 8000
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /login
            port: 8000
          initialDelaySeconds: 30
          timeoutSeconds: 1200
      imagePullSecrets:
        # NOTE: the secret has to be created at the same namespace level on which this deployment was created
        - name: dockerhub

Content provider service:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: webapp
    tier: frontend
spec:
  # type: LoadBalancer
  ports:
  - port: 8000
    targetPort: 8000
  selector:
    app: webapp
    tier: frontend

Nginx deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: secret-volume
        secret:
          secretName: nginxsecret
      - name: configmap-volume
        configMap:
          name: nginxconfigmap
      containers:
      - name: nginxhttps
        image: ymqytw/nginxhttps:1.5
        command: ["/home/auto-reload-nginx.sh"]
        ports:
        - containerPort: 443
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /index.html
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 1200
        resources:
          requests:
            # memory: "64Mi"
            cpu: "0.1"
          limits:
            # memory: "128Mi"
            cpu: "0.25"
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: secret-volume
        - mountPath: /etc/nginx/conf.d
          name: configmap-volume

Nginx service:

apiVersion: v1
kind: Service
metadata:
  name: nginxsvc
  labels:
    app: nginxsvc
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    app: nginx

Nginx config file:

server {
    server_name     local.mydomain.com;
    rewrite ^(.*) https://local.mydomain.com$1 permanent;
}

server {
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;

        listen 443 ssl;

        root /usr/share/nginx/html;
        index index.html;

        keepalive_timeout    70;
        server_name www.local.mydomain.com local.mydomain.com;
        ssl_certificate /etc/nginx/ssl/tls.crt;
        ssl_certificate_key /etc/nginx/ssl/tls.key;

        location / {
            proxy_pass  http://localhost:8000;
            proxy_connect_timeout       7200;
            proxy_send_timeout          7200;
            proxy_read_timeout          7200;
            send_timeout                7200;
    }
}

2 answers:

Answer 0 (score: 2)

Yes, a Service of type ClusterIP uses kube-proxy's iptables rules to distribute requests roughly evenly in a round robin fashion.

The documentation says:

    By default, the choice of backend is round robin.

That said, the round robin distribution of requests may be affected by things like:

  1. Busy backends
  2. Sticky sessions
  3. Connection-based routing (a backend pod with an established TCP session or secure tunnel keeps receiving the requests of a user who hits it repeatedly)
  4. Custom host-level / node-level iptables rules outside Kubernetes
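The "roughly even" distribution comes from how kube-proxy programs iptables in its default mode: for n endpoints it emits a chain of rules where rule i matches with probability 1/(n-i) and the last rule matches unconditionally, which works out to exactly 1/n per endpoint. A minimal simulation of that rule chain (a toy model, not real kube-proxy code; the pod names are made up):

```python
import random

def pick_endpoint(endpoints, rng):
    """Walk the rule chain the way kube-proxy's iptables statistic-mode
    rules do: endpoint i is tried with probability 1/(n-i); the final
    rule always matches."""
    n = len(endpoints)
    for i in range(n - 1):
        if rng.random() < 1.0 / (n - i):
            return endpoints[i]
    return endpoints[-1]

rng = random.Random(42)
endpoints = ["pod-a", "pod-b", "pod-c", "pod-d", "pod-e"]
counts = {ep: 0 for ep in endpoints}
for _ in range(100_000):
    counts[pick_endpoint(endpoints, rng)] += 1

# Each pod ends up with roughly 1/5 of the connections.
for ep in endpoints:
    print(ep, counts[ep] / 100_000)
```

Note that this choice is made per new connection, not per HTTP request, which matters for the caveats listed above.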

Answer 1 (score: 1)

ClusterIP is implemented by kube-proxy through probabilistic matching of iptables NAT rules, so yes, it distributes requests more or less evenly among the pods backing a given service.

Depending on your backend, this can still lead to less-than-ideal situations where a portion of the requests gets blocked on one of the backends while it waits for a heavy request to finish processing.

Also, keep in mind that this is done at the connection level, so if you have an established connection and then run multiple requests over that same TCP connection, it will not hop between backends.
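This connection-level behavior can be sketched with a toy model (not real kube-proxy code; names are illustrative): the DNAT to a backend happens once, when the connection is opened, so every request reusing a keep-alive connection lands on the same pod, while fresh connections spread across the replicas.

```python
import random

class Connection:
    """Models one TCP connection through a ClusterIP service: the backend
    is chosen once at connect time (DNAT), then reused for every request."""
    def __init__(self, backends, rng):
        self.backend = rng.choice(backends)  # decided here, once

    def request(self, path):
        # All requests on this connection go to the already-chosen pod.
        return (self.backend, path)

backends = ["pod-a", "pod-b", "pod-c"]
rng = random.Random(7)

conn = Connection(backends, rng)
# Ten requests over one keep-alive connection all hit the same pod.
hits = {conn.request(f"/job/{i}")[0] for i in range(10)}
print(hits)  # a set containing a single pod name
```

This is why a client that opens one long-lived connection and pipelines heavy requests through it can saturate a single replica even though the service nominally balances across five.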