GKE Ingress - backend services unhealthy, but pods are ready

Date: 2020-10-27 15:35:50

Tags: kubernetes google-kubernetes-engine kubernetes-ingress

I have a pod running with a simple readiness probe on path /. The pod is exposed by two different NodePort services, one serving port 8080 and the other serving port 3000. Each NodePort service is referenced by its own Ingress. The Ingress pointing at port 3000 works fine. The Ingress pointing at port 8080 always shows

All backend services are in UNHEALTHY state

First of all: I don't want to merge the two paths into a single Ingress, because the two endpoints need two different domains.

So here are the yaml files (simplified for readability):

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "6"
    meta.helm.sh/release-name: pgwatch2
    meta.helm.sh/release-namespace: pgwatch
  creationTimestamp: "2020-10-27T08:30:41Z"
  generation: 12
  labels:
    app.kubernetes.io/instance: pgwatch2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: pgwatch2
    app.kubernetes.io/version: "1.0"
    helm.sh/chart: pgwatch2-0.1.0
  name: pgwatch2
  namespace: pgwatch
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: pgwatch2
      app.kubernetes.io/name: pgwatch2
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: pgwatch2
        app.kubernetes.io/name: pgwatch2
    spec:
      containers:
      - env:
        - name: PW2_TESTDB
          value: "1"
        - name: PW2_DATASTORE
          value: postgres
        - name: PW2_WEBNOANONYMOUS
          value: "true"
        image: cybertec/pgwatch2-postgres:1.8.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: http
            scheme: HTTP
          initialDelaySeconds: 20
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        name: pgwatch2
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        - containerPort: 9187
          name: exporter
          protocol: TCP
        - containerPort: 3000
          name: grafana
          protocol: TCP
        - containerPort: 5432
          name: database
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: http
            scheme: HTTP
          initialDelaySeconds: 20
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: 400m
            memory: 512Mi
          requests:
            cpu: 400m
            memory: 512Mi
        securityContext: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /pgwatch2/persistent-config
          name: config-volume
        - mountPath: /var/lib/postgresql
          name: database-volume
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: pgwatch2
      serviceAccountName: pgwatch2
      terminationGracePeriodSeconds: 30
      volumes:
      - name: config-volume
        persistentVolumeClaim:
          claimName: pgwatch2-config
      - name: database-volume
        persistentVolumeClaim:
          claimName: pgwatch2-database

NodePort

apiVersion: v1
kind: Service
metadata:
  name: pgwatch-admin-nodeport
  namespace: pgwatch
spec:
  clusterIP: 10.0.8.137
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30136
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/instance: pgwatch2
    app.kubernetes.io/name: pgwatch2
  sessionAffinity: None
  type: NodePort
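
One thing I am still looking into: as far as I understand, the GKE ingress controller derives the load balancer health check from the container's readinessProbe (same path and port), but the health check can also be pinned explicitly by attaching a BackendConfig to this Service. A minimal sketch of what I believe that would look like (untested; the resource name is made up, and older GKE versions use apiVersion cloud.google.com/v1beta1):

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: pgwatch-admin-hc   # hypothetical name
  namespace: pgwatch
spec:
  healthCheck:
    type: HTTP
    requestPath: /
    port: 8080
    checkIntervalSec: 15
    timeoutSec: 5
    healthyThreshold: 1
    unhealthyThreshold: 2

The Service above would then reference it with the annotation cloud.google.com/backend-config: '{"ports": {"8080": "pgwatch-admin-hc"}}'.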

Ingress

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.gcp.kubernetes.io/pre-shared-cert: mcrt-7ad43cec-7ba3-485d-987d-2c38eb98bab2
    ingress.kubernetes.io/backends: '{"k8s-be-30136--45b6dcefab5c8dab":"UNHEALTHY"}'
    ingress.kubernetes.io/forwarding-rule: k8s2-fr-dqxcdel8-pgwatch-pgwatch-admin-ingress-jhqbfs4b
    ingress.kubernetes.io/https-forwarding-rule: k8s2-fs-dqxcdel8-pgwatch-pgwatch-admin-ingress-jhqbfs4b
    ingress.kubernetes.io/https-target-proxy: k8s2-ts-dqxcdel8-pgwatch-pgwatch-admin-ingress-jhqbfs4b
    ingress.kubernetes.io/ssl-cert: mcrt-7ad43cec-7ba3-485d-987d-2c38eb98bab2
    ingress.kubernetes.io/target-proxy: k8s2-tp-dqxcdel8-pgwatch-pgwatch-admin-ingress-jhqbfs4b
    ingress.kubernetes.io/url-map: k8s2-um-dqxcdel8-pgwatch-pgwatch-admin-ingress-jhqbfs4b
    kubernetes.io/ingress.global-static-ip-name: pgwatch-admin
    networking.gke.io/managed-certificates: pgwatch-admin
  creationTimestamp: "2020-10-27T15:08:16Z"
  finalizers:
  - networking.gke.io/ingress-finalizer-V2
  generation: 1
  name: pgwatch-admin-ingress
  namespace: pgwatch
  resourceVersion: "27909820"
  selfLink: /apis/extensions/v1beta1/namespaces/pgwatch/ingresses/pgwatch-admin-ingress
  uid: 57984217-37a2-4827-9708-c8e9dfd20edb
spec:
  backend:
    serviceName: pgwatch-admin-nodeport
    servicePort: 8080
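
For comparison, the Ingress in front of port 3000 (the one that worked at first) is structurally the same; simplified, and with illustrative names, it looks roughly like this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    # illustrative values; the real setup uses its own static IP and managed certificate
    kubernetes.io/ingress.global-static-ip-name: pgwatch-grafana
    networking.gke.io/managed-certificates: pgwatch-grafana
  name: pgwatch-grafana-ingress
  namespace: pgwatch
spec:
  backend:
    serviceName: pgwatch-grafana-nodeport
    servicePort: 3000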

Any idea what is going wrong here?

Thanks.

UPDATE

The second Ingress has now changed as well: its status went from working to the same as the other one: unhealthy backends. So this is probably a problem with the pod itself, not with a single Ingress. Still investigating what might be wrong here.
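
One difference I want to rule out: the kubelet treats any 2xx/3xx response as a successful readiness probe, while the Google Cloud load balancer health check only accepts a plain 200. So if the admin UI (running with PW2_WEBNOANONYMOUS=true) answers / with a redirect to a login page, the pod would look ready while the load balancer keeps marking the backend unhealthy. A throwaway pod like the sketch below (pod name and image tag are just examples) should show the actual status code; kubectl logs probe-check -n pgwatch then prints the response line:

apiVersion: v1
kind: Pod
metadata:
  name: probe-check
  namespace: pgwatch
spec:
  restartPolicy: Never
  containers:
  - name: curl
    image: curlimages/curl:7.73.0
    # the image's entrypoint is curl; -v prints the HTTP status line, -o discards the body
    args: ["-sv", "-o", "/dev/null", "http://pgwatch-admin-nodeport.pgwatch.svc.cluster.local:8080/"]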

0 Answers