How to enable subdomains with GKE

Date: 2019-06-06 23:14:45

Tags: kubernetes subdomain google-kubernetes-engine

I have different Kubernetes deployments in GKE and I would like to access them from different external subdomains.

I tried to create 2 deployments with subdomains "sub1" and "sub2" and hostname "app", another deployment with hostname "app", and to expose that service on the IP XXX.XXX.XXX.XXX, configured in DNS as app.mydomain.com.

I would like to access the two sub-deployments from sub1.app.mydomain.com and sub2.app.mydomain.com.

This should be automatic: when adding a new deployment, I should not have to change the DNS records each time. Maybe I am approaching the problem the wrong way; I am new to GKE, any suggestions?

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-host
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-host
        type: proxy
    spec:
      hostname: app
      containers:
        - image: nginx:alpine
          name: nginx
          ports:
            - name: nginx
              containerPort: 80
              hostPort: 80
      restartPolicy: Always
status: {}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-subdomain-1
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-subdomain-1
        type: app
    spec:
      hostname: app
      subdomain: sub1
      containers:
        - image: nginx:alpine
          name: nginx
          ports:
            - name: nginx
              containerPort: 80
              hostPort: 80
      restartPolicy: Always
status: {}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-subdomain-2
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-subdomain-2
        type: app
    spec:
      hostname: app
      subdomain: sub2
      containers:
        - image: nginx:alpine
          name: nginx
          ports:
            - name: nginx
              containerPort: 80
              hostPort: 80
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: my-expose-dns
spec:
  ports:
    - port: 80
  selector:
    name: my-host
  type: LoadBalancer

3 Answers:

Answer 0 (score: 1)

Solved!

This is the correct nginx config: the variable in proxy_pass only works once nginx has a resolver pointing at the cluster DNS, and the upstream name has to be spelled out as a full cluster FQDN (the namespace here is assumed to be default):

server {
    listen       80;
    server_name ~^(?<subdomain>.*?)\.;
    resolver kube-dns.kube-system.svc.cluster.local valid=5s;

    location / {
        proxy_pass         http://$subdomain.my-internal-host.default.svc.cluster.local;
    }
}

Answer 1 (score: 0)

You want an Ingress. There are several options available (Istio, nginx, traefik, etc.). I like using nginx; it is really easy to install and work with. The installation steps can be found at kubernetes.github.io.

Once the Ingress controller is installed, you want to make sure you have exposed it with a Service of type=LoadBalancer. Next, if you are using Google Cloud DNS, set up a wildcard entry for your domain with an A record pointing at the external IP address of the Ingress controller's Service. In your case that would be *.app.mydomain.com.

Now all your traffic to app.mydomain.com goes to that load balancer and is handled by the Ingress controller, so from here on you just add a Service and an Ingress entity for each service you want.

apiVersion: v1
kind: Service
metadata:
  name: my-service1
spec:
  selector:
    app: my-app-1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP

apiVersion: v1
kind: Service
metadata:
  name: my-service2
spec:
  selector:
    app: my-app2
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
  - host: sub1.app.mydomain.com
    http:
      paths:
      - backend:
          serviceName: my-service1
          servicePort: 80
  - host: sub2.app.mydomain.com
    http:
      paths:
      - backend:
          serviceName: my-service2
          servicePort: 80

The routing shown is host based, but you could just as easily handle these services as path based, so that all traffic to app.mydomain.com/service1 goes to one of your deployments.
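For completeness, a path-based variant of the same Ingress might look like the sketch below (the name and paths are illustrative; how trailing paths are rewritten for the backend depends on your Ingress controller and its annotations):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: path-virtual-host-ingress
spec:
  rules:
  - host: app.mydomain.com
    http:
      paths:
      - path: /service1
        backend:
          serviceName: my-service1
          servicePort: 80
      - path: /service2
        backend:
          serviceName: my-service2
          servicePort: 80
```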

Answer 2 (score: 0)

That could be a solution; in my case I need something more dynamic. I do not want to update the Ingress every time I add a subdomain.

I almost solved the problem by using an nginx proxy like this:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: my-subdomain-1
    spec:
      replicas: 1
      strategy: {}
      template:
        metadata:
          creationTimestamp: null
          labels:
            name: my-subdomain-1
            type: app
        spec:
          hostname: sub1
          subdomain: my-internal-host
          containers:
            - image: nginx:alpine
              name: nginx
              ports:
                - name: nginx
                  containerPort: 80
                  hostPort: 80
          restartPolicy: Always
    status: {}
    ---
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: my-subdomain-2
    spec:
      replicas: 1
      strategy: {}
      template:
        metadata:
          creationTimestamp: null
          labels:
            name: my-subdomain-2
            type: app
        spec:
          hostname: sub2
          subdomain: my-internal-host
          containers:
            - image: nginx:alpine
              name: nginx
              ports:
                - name: nginx
                  containerPort: 80
                  hostPort: 80
          restartPolicy: Always
    status: {}
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-config-dns-file
    data:
      nginx.conf: |
        server {
          listen       80;
          server_name ~^(?<subdomain>.*?)\.;

          location / {
            proxy_pass         http://$subdomain.my-internal-host;
            root   /usr/share/nginx/html;
            index  index.html index.htm;
          }

          error_page   500 502 503 504  /50x.html;
          location = /50x.html {
            root   /usr/share/nginx/html;
          }
        }
    ---
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: my-proxy
    spec:
      replicas: 1
      strategy: {}
      template:
        metadata:
          creationTimestamp: null
          labels:
            name: my-proxy
            type: app
        spec:
          subdomain: my-internal-host
          containers:
            - image: nginx:alpine
              name: nginx
              volumeMounts:
                - name: nginx-config-dns-file
                  mountPath: /etc/nginx/conf.d/default.conf.test
                  subPath: nginx.conf
              ports:
                - name: nginx
                  containerPort: 80
                  hostPort: 80
          volumes:
            - name: nginx-config-dns-file
              configMap:
                name: nginx-config-dns-file
          restartPolicy: Always
    status: {}
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: my-internal-host
    spec:
      selector:
        type: app
      clusterIP: None
      ports:
        - name: sk-port
          port: 80
          targetPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: sk-expose-dns
    spec:
      ports:
        - port: 80
      selector:
        name: my-proxy
      type: LoadBalancer

I did learn that I need the headless Service "my-internal-host" so that all the deployments can see each other internally. The only remaining problem is nginx's proxy_pass: if I hard-code it to 'proxy_pass http://sub1.my-internal-host;' it works, but it does not work with the regexp variable.

The problem is related to the nginx resolver.
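As an aside, the capture that the `server_name` regex above is meant to perform can be reproduced in Python (nginx uses PCRE-style `(?<name>...)` named groups, written `(?P<name>...)` in Python's `re` module); the hostnames are the ones from the question:

```python
import re

# Equivalent of nginx: server_name ~^(?<subdomain>.*?)\.;
# The non-greedy group captures everything up to the first dot.
pattern = re.compile(r"^(?P<subdomain>.*?)\.")

def subdomain_of(host):
    """Extract the leading label of a Host header, as the nginx regex would."""
    m = pattern.match(host)
    return m.group("subdomain") if m else None

print(subdomain_of("sub1.app.mydomain.com"))  # -> sub1
print(subdomain_of("sub2.app.mydomain.com"))  # -> sub2
```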